CUDA available, moved model to cuda:0, still runs on CPU

Hi, I’m trying to get my model to run on the GPU.

I am using this example code: PyTorch Profiler With TensorBoard — PyTorch Tutorials 1.9.0+cu102 documentation
except that in mine I call torch.cuda.is_available() to check that CUDA is available, which returns True.

In TensorBoard, however, I can see it is running on the CPU, and my CPU usage goes up.

What am I missing here? I’m using a GTX 1080 Ti on Windows 10. Thanks
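A simplified sketch of what my setup looks like (with a hypothetical stand-in model, not my actual network):

```python
import torch
import torch.nn as nn

# Hypothetical minimal model standing in for the tutorial's network
model = nn.Linear(10, 2)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)  # nn.Module.to moves parameters; reassigning is the common idiom

# The inputs must ALSO be moved: Tensor.to() is NOT in place, so reassign
x = torch.randn(4, 10)
x = x.to(device)

out = model(x)  # runs on the GPU only if both the model and x are on it
```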

If you’ve moved both the model and the inputs to the GPU, it will be used (I don’t know what exactly TensorBoard is showing).
You could verify it by checking the device attribute of some of the model’s parameters, and by checking the GPU’s memory usage and utilization via nvidia-smi.
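For example, something along these lines (using a hypothetical stand-in model):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # hypothetical stand-in for your model
if torch.cuda.is_available():
    model = model.to("cuda:0")

# Every parameter reports the device it actually lives on
for name, p in model.named_parameters():
    print(name, p.device)  # e.g. "weight cuda:0" once moved

# Or just check one representative parameter
print(next(model.parameters()).device)
```

While training, running nvidia-smi in a separate terminal should show allocated memory and non-zero GPU utilization if the GPU is really being used.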

TensorBoard still shows it running on the CPU; my model and training data are placed onto the GPU using cuda:0.

Printing the parameters’ device outputs cuda:0.

Thanks for your help

Sorry, I replied but I’m not sure you have seen it :open_mouth:

Unfortunately, I don’t know what TensorBoard tries to show as the “Device Type”. Were you able to check nvidia-smi for the memory usage and utilization?