torch.cuda.is_available() is True while I am using the GPU

Hi! I have a question about how torch.cuda.is_available() works.
While training my network, I usually use this code:

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
network.to(device)      # nn.Module.to() moves the module's parameters in place
data = data.to(device)  # Tensor.to() returns a new tensor, so reassign it
...

But I found that torch.cuda.is_available() still returns True while the network is training.
I am not sure why. Does this mean my code isn't running on the GPU?

Hi,
torch.cuda.is_available() just tells you whether CUDA is available on your machine. If it is, the flag stays True for the whole program, no matter what is actually running on the GPU.
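
For example, here is a minimal sketch (the nn.Linear model is just a placeholder, not your network) showing that the flag stays True and how to check where your model actually lives:

import torch
import torch.nn as nn

# Placeholder model for illustration; substitute your own network.
network = nn.Linear(10, 2)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
network.to(device)

# is_available() only reports that a CUDA device exists; it says nothing
# about where your model or data currently live.
print(torch.cuda.is_available())          # True whenever CUDA is usable

# To see where the model actually is, inspect one of its parameters:
print(next(network.parameters()).device)  # e.g. cuda:0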


Thanks a lot! I thought it was a way to find out whether the GPU was actually being used.
Is there any way to find out whether my program is really using the GPU?

You can run nvidia-smi on the command line to check that your program is using some resources on your GPU.
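
You can also check from inside PyTorch; a small sketch (the tensor shape here is arbitrary):

import torch

if torch.cuda.is_available():
    device = torch.device("cuda:0")
    x = torch.randn(1024, 1024, device=device)  # allocate a tensor on the GPU

    # Nonzero allocated memory confirms this process is really using the GPU.
    print(torch.cuda.memory_allocated(device))

    # The device a given tensor lives on, and the name of the GPU.
    print(x.device)  # cuda:0
    print(torch.cuda.get_device_name(device))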

Thanks, I know that command works.
By the way, do you think I can train two networks on the same GPU at the same time?
I found that no error is raised when I do this, but I thought a GPU couldn't handle two tasks at once.

You can do that, no problem.
Whether it's going to be faster than running the two one after the other is very hard to guess; it depends on how much each task uses the GPU. You will need to check.
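
For what it's worth, here is a minimal sketch with two toy networks (the shapes, optimizers, and loop length are arbitrary) sharing one GPU in a single process:

import torch
import torch.nn as nn

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Two independent toy networks on the same device.
net_a = nn.Linear(100, 10).to(device)
net_b = nn.Linear(100, 10).to(device)
opt_a = torch.optim.SGD(net_a.parameters(), lr=0.01)
opt_b = torch.optim.SGD(net_b.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for step in range(10):
    x = torch.randn(32, 100, device=device)
    y = torch.randn(32, 10, device=device)

    # Both training steps run on the same GPU; the driver schedules the
    # kernels, so correctness is fine even if they contend for resources.
    for net, opt in ((net_a, opt_a), (net_b, opt_b)):
        opt.zero_grad()
        loss = loss_fn(net(x), y)
        loss.backward()
        opt.step()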


Okay! Thanks a lot! That’s very helpful.

My return value is True, but I can't use the GPU. Why?