I have defined a U-Net along with training code, but I don't think it is actually training with CUDA (i.e. on the GPU). In my limited experience with TensorFlow, when training started with CUDA it would print a message stating that CUDA was running, devices were identified, and so on. Is there something similar for PyTorch? Right now I'm only going by the fact that GPU utilization is very low.
I have included a few lines in my code that I thought would enable CUDA, but I don't believe it has worked, i.e.
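For reference, here is a minimal sketch of the usual checks (the `UNet`/`loader` names are placeholders for your own model and data loader): PyTorch doesn't print a startup banner like TensorFlow, but `torch.cuda.is_available()` tells you whether CUDA is usable, and both the model and every batch must be explicitly moved to the device.

```python
import torch

# PyTorch prints no CUDA banner at startup; query it explicitly instead.
print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Using device:", device)

# Both the model AND the data must live on the device. A common mistake is
# moving only the model; the inputs then stay on the CPU and training is
# either slow or errors out with a device mismatch.
# model = UNet().to(device)
# for x, y in loader:
#     x, y = x.to(device), y.to(device)
#     ...
```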
This screenshot shows the Nvidia GPU. I have made CUDA work in TensorFlow before, so I know my CUDA setup is correct. I assume the issue is in my code and the way I'm calling it.
I think there are known issues with the Windows Task Manager not reporting CUDA usage of the GPU by default, so it is expected to stay close to 0% utilization there.
You can use CLI tools like nvidia-smi to see what is actually happening on the GPU.
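As a quick sketch, PyTorch itself can also report what it has allocated on the device; these numbers are only meaningful once tensors have actually been moved to the GPU, but they are an easy cross-check against nvidia-smi:

```python
import torch

if torch.cuda.is_available():
    # Name of the device PyTorch is actually using
    print(torch.cuda.get_device_name(0))
    # Memory actively held by tensors vs. reserved by the caching allocator
    print(f"allocated: {torch.cuda.memory_allocated(0) / 1e6:.1f} MB")
    print(f"reserved:  {torch.cuda.memory_reserved(0) / 1e6:.1f} MB")
else:
    print("CUDA is not available to PyTorch")
```

If both values stay at 0 MB during training, nothing was ever moved to the GPU.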
Or check the temps? 71 °C doesn't look like an idle card.