I have set up PyTorch and CUDA on my Windows 11 laptop, which has Anaconda installed.
It seems to be working. On top of that, my code moves the model and tensors to the default device (I have written device-agnostic code using
device = "cuda" if torch.cuda.is_available() else "cpu"). This same code runs fine (on the GPU) in Google Colab.
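For reference, the device-agnostic pattern described above boils down to something like this (the `Linear` layer and tensor shapes here are placeholders, not the actual model):

```python
import torch

# Device-agnostic setup: use the GPU when available, otherwise fall back to CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical model and batch; the same .to(device) pattern applies to any module/tensor
model = torch.nn.Linear(4, 2).to(device)
x = torch.randn(8, 4, device=device)

out = model(x)
print(out.device)  # matches the selected device
```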
Despite this, when I train the model my laptop’s CPU utilization is near 100% while the GPU utilization barely varies…
Any idea why that could be?
I guess you are using Task Manager to check the GPU utilization, which is misleading on Windows, as it shows the video engine (and other resources) instead of compute. Select the right “Compute” view or use
nvidia-smi to check the memory usage as well as the GPU utilization. If it’s still low, profile your code and check where the bottleneck is, as your CPU might be blocking the GPU execution.
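As a sketch of the profiling step, torch.profiler can show which ops dominate the runtime and whether they execute on the CPU or the GPU (the tiny model and batch below are placeholders for your training step):

```python
import torch
from torch.profiler import profile, ProfilerActivity

# Hypothetical tiny model and batch, just to illustrate the profiling workflow
model = torch.nn.Linear(512, 512)
x = torch.randn(64, 512)

# Record CUDA activity too if a GPU is present
activities = [ProfilerActivity.CPU]
if torch.cuda.is_available():
    activities.append(ProfilerActivity.CUDA)
    model, x = model.cuda(), x.cuda()

with profile(activities=activities) as prof:
    model(x)

# The ops with the largest total time point to the bottleneck
table = prof.key_averages().table(sort_by="cpu_time_total", row_limit=5)
print(table)
```

If the top entries are all CPU-side ops (e.g. data loading or preprocessing) while the GPU kernels are cheap, the CPU is the bottleneck.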
Hi! Thanks a lot for the response.
Indeed, I was using Task Manager to check the GPU utilization. Sorry, but what do you mean by the right “Compute” view? I would like to see the utilization in real time (with the
nvidia-smi utility I only get a “snapshot” of it).
By the way, I guess it works (I tried the same training on the CPU and it takes more than twice as long…)!
Before closing this topic, how do you monitor the GPU usage? I don’t know what you mean by the right “Compute” view.
Thanks a lot
I’m not using Windows, but this post might be helpful.
Thanks a lot for the post! Unfortunately, I cannot see it.
Unfortunately, I don’t have any Windows experience, so you would need to use
nvidia-smi and/or wait for Windows experts to chime in.
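For live readings instead of a single snapshot, nvidia-smi can loop on an interval (the 1-second interval below is just an example; note this needs an NVIDIA driver installed):

```shell
# Refresh the full nvidia-smi report every second (Ctrl+C to stop)
nvidia-smi -l 1

# Or query only GPU utilization and memory usage as CSV, once per second
nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv -l 1
```

The second form is handy for logging utilization over a whole training run.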