Running PyTorch on GPU

Hello, I know this question probably gets asked quite often on this forum, but after going through the existing posts I am still perplexed and could use some clarification. I am running PyTorch 1.6.0 on Ubuntu 20.04 with CUDA Toolkit 11.0 installed (hopefully that’s all I needed to get everything running smoothly), and I am running some models on my local GPU.

Some models are as small as a few dense layers, while I have also done transfer learning with the densenet121 model. I can tell that my GPU is being used via NVIDIA X Server and by printing my tensors to confirm they have been loaded onto the GPU. However, I can also see that my CPU usage increases significantly (to a much greater degree with the densenet121 model, of course) even while the GPU is in use. Is this normal?
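
For context, this is roughly the kind of check I am doing (just a minimal sketch; the model and tensor here are placeholders, not my actual code):

```python
import torch

# Use the GPU if PyTorch can see one, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder model and batch just to illustrate the device check
model = torch.nn.Linear(128, 10).to(device)
x = torch.randn(32, 128, device=device)

print(next(model.parameters()).device)  # e.g. cuda:0
print(x.device)                         # e.g. cuda:0
print(model(x).device)                  # the output stays on the same device
```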

It’s normal. The GPU is driven by CPU code, and it’s normal for the CPU to spin while driving the GPU.
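
If you want to see that in action, a rough sketch like the one below (arbitrary sizes, not your model) shows the pattern: kernel launches return almost immediately on the CPU because they only queue work, and the CPU then has to wait for the GPU, e.g. in `torch.cuda.synchronize()`, which shows up as CPU usage.

```python
import time
import torch

device = torch.device("cuda")
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

# Kernel launches are asynchronous: this loop returns almost immediately,
# because the CPU is only queuing matmuls for the GPU to run
start = time.time()
for _ in range(100):
    c = a @ b
print(f"after launches: {time.time() - start:.3f}s")

# synchronize() blocks the CPU until the GPU has finished the queued kernels;
# that wait is a big part of where the CPU usage shows up
torch.cuda.synchronize()
print(f"after sync:     {time.time() - start:.3f}s")
```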