I have set up PyTorch and CUDA on my Windows 11 laptop, which has Anaconda installed.
It seems to be working, as torch.cuda.is_available() returns True.
On top of that, my code moves the model and tensors to the selected device (I wrote device-agnostic code using device = "cuda" if torch.cuda.is_available() else "cpu"). This same code runs fine (on the GPU) in Google Colab.
Even so, when I train the model I see my laptop’s CPU utilization near 100%, while the GPU utilization barely varies…
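For reference, a minimal sketch of the device-agnostic pattern described above (the layer sizes and tensor shapes here are just placeholders):

```python
import torch

# Pick the GPU when available, otherwise fall back to the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

# Move the model's parameters to the selected device
model = torch.nn.Linear(16, 4).to(device)

# Create the input tensor directly on the same device
x = torch.randn(8, 16, device=device)

out = model(x)
print(out.device)  # confirms which device the forward pass ran on
```

The same script then runs unchanged on a CUDA machine and on a CPU-only one.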
I guess you are using Task Manager to check the GPU utilization, which is misleading on Windows, as it shows the video engine (and other resources) instead of compute by default. Select the right “Compute” view, or use nvidia-smi to check the memory usage as well as the GPU utilization. If it’s still low, profile your code and check where the bottleneck is, as your CPU might be blocking the GPU execution.
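One way to do that profiling is with torch.profiler; a minimal sketch (the model and input here are placeholders, not your actual training code):

```python
import torch
from torch.profiler import profile, ProfilerActivity

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(512, 512).to(device)
x = torch.randn(64, 512, device=device)

# Only ask the profiler for CUDA events when a GPU is actually in use
activities = [ProfilerActivity.CPU]
if device == "cuda":
    activities.append(ProfilerActivity.CUDA)

with profile(activities=activities) as prof:
    for _ in range(10):
        model(x).sum().backward()

# Ops dominated by CPU time point at where the CPU is blocking the GPU
table = prof.key_averages().table(sort_by="cpu_time_total", row_limit=10)
print(table)
```

If most of the time lands in data loading or host-side ops rather than the GPU kernels, that would explain the 100% CPU / idle GPU pattern.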
Indeed, I was using Task Manager to check the GPU utilization. Sorry, but what do you mean by the right “Compute” view? I would like to see the utilization in real time (with nvidia-smi I only obtain a “snapshot” of it).
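For a live view, nvidia-smi can simply be re-run periodically (it also accepts a loop flag such as `nvidia-smi -l 1` to refresh every second). A small Python sketch of the polling approach; the `watch_gpu` helper name is hypothetical, and the query flags are standard nvidia-smi options:

```python
import shutil
import subprocess
import time

def watch_gpu(interval=1.0, iterations=3):
    """Hypothetical helper: poll nvidia-smi for a rough real-time view."""
    if shutil.which("nvidia-smi") is None:
        print("nvidia-smi not found on PATH")
        return
    for _ in range(iterations):
        # Query only the compute utilization and memory usage columns
        subprocess.run([
            "nvidia-smi",
            "--query-gpu=utilization.gpu,memory.used",
            "--format=csv",
        ])
        time.sleep(interval)
```

Calling `watch_gpu()` then prints a fresh utilization/memory line every second or so, which is closer to the real-time view you are after.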
By the way, I guess it is working (I tried the same training on the CPU and it takes more than twice as long…)!