About GPU (CUDA)

Hello, I have a question about using PyTorch with a GPU. torch.cuda.is_available() returns True and torch.cuda.get_device_name(0) returns my GTX 1060. I also set cnn = cnn.cuda() and move the inputs with x = Variable(x); return x.cuda() if torch.cuda.is_available() else x. However, when I check the GPU usage it stays at 0% during training and progress is very slow. When I use TensorFlow or Keras I get 20~30% GPU usage and training is much faster. Am I doing anything wrong with the settings? Thanks.
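
For reference, this is roughly what my setup looks like (the model and data here are simplified placeholders, not my actual code):

```python
import torch
import torch.nn as nn
from torch.autograd import Variable

# placeholder CNN, just to illustrate the setup
cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
)

print(torch.cuda.is_available())      # True on my machine
print(torch.cuda.get_device_name(0))  # GTX 1060

cnn = cnn.cuda()  # move the model to the GPU

def to_var(x):
    x = Variable(x)
    return x.cuda() if torch.cuda.is_available() else x

x = to_var(torch.randn(8, 3, 32, 32))  # inputs are moved to the GPU the same way
out = cnn(x)
```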

The GPU utilization depends on the workload. E.g. if your model is quite small, you might see low GPU utilization throughout training. On the other hand, you might only see short utilization peaks if e.g. the data loading is a bottleneck.
Could you share some code so that we can have a look at possible issues?
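
In the meantime, one common culprit is the DataLoader. As a rough sketch (assuming a standard Dataset/DataLoader pipeline, not your actual code), using multiple workers and pinned memory often helps keep the GPU fed:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# placeholder dataset; swap in your own Dataset
dataset = TensorDataset(torch.randn(1000, 3, 32, 32),
                        torch.randint(0, 10, (1000,)))

loader = DataLoader(
    dataset,
    batch_size=64,
    shuffle=True,
    num_workers=4,    # load batches in background worker processes
    pin_memory=True,  # speeds up host-to-GPU copies
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

for data, target in loader:
    # non_blocking=True can overlap the copy with compute when pin_memory is set
    data = data.to(device, non_blocking=True)
    target = target.to(device, non_blocking=True)
    # ... forward / backward / optimizer step ...
```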

As a small side note: Variables are deprecated since PyTorch 0.4.0, so in newer versions you can just use tensors (torch.Tensor) directly.
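
E.g. instead of wrapping the inputs in Variable, you can work with tensors directly (a minimal sketch, assuming a generic placeholder model):

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(10, 2).to(device)     # placeholder model
x = torch.randn(4, 10, device=device)   # plain tensors track gradients via autograd now

out = model(x)
loss = out.sum()
loss.backward()                          # no Variable wrapper needed
```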
