Hi! I have a question about how torch.cuda.is_available() works.
While training my network, I usually use the code:
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
network.to(device)
data = data.to(device)  # .to() is not in-place for tensors, so the result must be reassigned
...
But I found that torch.cuda.is_available() is still True when the network is being trained.
I am not sure why this happens. Does this mean that the code isn’t running on a GPU?
Hi, torch.cuda.is_available() just tells you whether or not CUDA is available. If it is, that flag will remain True throughout your program; it says nothing about where your model or tensors currently live.
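If you want to verify that the model and data actually ended up on the GPU, you can inspect their `.device` attributes instead. A minimal sketch (the toy `Linear` module and random tensor here are just for illustration):

```python
import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

net = torch.nn.Linear(4, 2).to(device)  # modules are moved in place by .to()
x = torch.randn(8, 4).to(device)        # tensors are NOT: .to() returns a copy

# Check where things actually live
print(next(net.parameters()).device)  # "cuda:0" if a GPU was found, otherwise "cpu"
print(x.device)
```

This runs on a CPU-only install too, which makes it a convenient sanity check when debugging device placement.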
Thanks, now I understand what this function does.
By the way, do you think that I can train two networks with the same GPU at the same time?
I just found that no error is raised when I do this, but I thought that a GPU couldn't handle two tasks at the same time.
You can do that, no problem.
Whether it will be faster than running the two one after the other is very hard to guess, though. It depends on how much each task uses the GPU. You will need to benchmark it.
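One rough way to benchmark is to time interleaved training of the two networks in a single process. This is only a sketch, and the toy networks, data, and step counts below are made up for illustration:

```python
import time
import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

def train_step(net, opt, x, y):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(net(x), y)
    loss.backward()
    opt.step()

# Two independent toy networks sharing the same device
nets = [torch.nn.Linear(64, 1).to(device) for _ in range(2)]
opts = [torch.optim.SGD(n.parameters(), lr=0.01) for n in nets]
x = torch.randn(256, 64, device=device)
y = torch.randn(256, 1, device=device)

start = time.perf_counter()
for _ in range(100):
    for net, opt in zip(nets, opts):
        train_step(net, opt, x, y)
if device.type == "cuda":
    torch.cuda.synchronize()  # CUDA kernels launch asynchronously; wait before reading the clock
elapsed = time.perf_counter() - start
print(f"100 steps for both nets: {elapsed:.3f}s")
```

Compare that against running each network alone for the same number of steps. If the two trainings run in separate processes instead, you can watch the utilization and memory columns of `nvidia-smi` while both are active to see how much of the GPU each one actually gets.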