PyTorch is only using GPU for VRAM, not for actual compute

Note: when I’m running a test, I’m using this script: examples/mnist/main.py from the pytorch/examples repository on GitHub, with no arguments, just python main.py.

I’m currently on Windows, and I’m installing PyTorch into a sandboxed Anaconda environment (Python 3.6) with this command: conda install pytorch torchvision cudatoolkit=10.0 -c pytorch
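
For reference, a quick sanity check like the following can confirm that PyTorch sees the GPU inside that environment (a minimal sketch; the tensor size is arbitrary):

    import torch

    # Confirm a CUDA-enabled build of PyTorch is installed and a GPU is visible.
    print(torch.__version__)
    print(torch.cuda.is_available())          # expected: True
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))  # name of the detected GPU

        # Run a trivial computation on the GPU to make sure kernels launch.
        x = torch.randn(1000, 1000, device="cuda")
        y = x @ x
        torch.cuda.synchronize()              # wait for the kernel to finish
        print(y.device)                       # expected: cuda:0
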
Here is the screenshot showing that my GPU is detected:


And here is a screenshot showing that the CPU is being fully used while the GPU is only being used for VRAM, not for actual compute:

How do I fix this issue? Am I installing with the wrong command?

Hi,

I think the Task Manager on Windows does not report CUDA usage by default, so you will never see high GPU usage there even when running compute-intensive tasks.
@peterjc123 might be able to give a more detailed explanation of this?
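
In the meantime, if you want to convince yourself from inside PyTorch that the heavy compute really runs on the GPU, a rough timing comparison along these lines should do it (only a sketch; the matrix size n=4000 is an arbitrary choice):

    import time
    import torch

    def timed_matmul(device, n=4000):
        # Multiply two large random matrices on the given device and time it.
        a = torch.randn(n, n, device=device)
        b = torch.randn(n, n, device=device)
        if device == "cuda":
            torch.cuda.synchronize()  # make sure the setup work has finished
        start = time.time()
        c = a @ b
        if device == "cuda":
            torch.cuda.synchronize()  # wait for the GPU kernel to finish
        return time.time() - start

    print("cpu :", timed_matmul("cpu"))
    if torch.cuda.is_available():
        print("cuda:", timed_matmul("cuda"))  # should be much faster than the CPU run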

It is normal; see this post for more details: https://devblogs.microsoft.com/directx/gpus-in-the-task-manager/


If you really want to see that performance counter, then you should use nvidia-smi.
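
For example, while main.py is training you can poll it from a second terminal with a small helper like this (this assumes nvidia-smi is on your PATH and that your driver supports the --query-gpu flags, which recent drivers do):

    import subprocess
    import time

    # Print GPU utilization and memory use once per second while training
    # runs in another window. Stop with Ctrl+C.
    while True:
        out = subprocess.check_output([
            "nvidia-smi",
            "--query-gpu=utilization.gpu,memory.used",
            "--format=csv,noheader",
        ])
        print(out.decode().strip())
        time.sleep(1)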

Thank you for your help!