In the code, I make sure to call model.cuda() and wrap the input with Variable(x.type(dtype)), where dtype = torch.cuda.FloatTensor.
I also print x.is_cuda and next(model.parameters()).is_cuda, and both are True.
So it seems that everything is on the GPU. But when I execute the program (which runs model(x)) and monitor the GPU usage with watch -n 1 nvidia-smi, the Volatile GPU-Util shows 0%. So it looks like the GPU is not being used; the processing speed is indeed far too slow, and I don't feel any heat from my video card. I can rule out the possibility that my card doesn't support CUDA, because everything worked until I started using a custom DataLoader for my dataset.
Is there anything I am missing to make sure the program actually uses the GPU?
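For reference, the relevant part of my setup looks roughly like this (a simplified sketch; the real network and DataLoader are my own, so the model and input here are just placeholders):

```python
import torch
import torch.nn as nn
from torch.autograd import Variable

dtype = torch.cuda.FloatTensor

# Placeholder model and batch standing in for my real network and DataLoader output.
model = nn.Linear(128, 10)
model.cuda()                      # move the parameters to the GPU

x = torch.randn(32, 128)
x = Variable(x.type(dtype))       # move the batch to the GPU

print(x.is_cuda)                           # True
print(next(model.parameters()).is_cuda)    # True

out = model(x)
print(out.is_cuda)                         # True
```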
Thanks for your reply; that might be it.
But would that cause 0% GPU utilization all the time? After all, at least the model's forward pass should have run on the GPU. Sorry, I don't fully understand the interaction between the CPU and the GPU.
But I actually checked and printed the output's is_cuda right after the model() call, and it returns True. So the data batch was loaded and went through the forward pass, right?
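If it helps, here is roughly how I could time the data loading and the forward pass separately to see where the time actually goes (a rough sketch; the model and dataset are dummies standing in for my real ones, and torch.cuda.synchronize() is there so the GPU timings are meaningful):

```python
import time
import torch
import torch.nn as nn
from torch.autograd import Variable
from torch.utils.data import DataLoader, TensorDataset

dtype = torch.cuda.FloatTensor

# Dummy stand-ins for my real network and custom DataLoader.
model = nn.Linear(128, 10).cuda()
loader = DataLoader(TensorDataset(torch.randn(1024, 128), torch.zeros(1024)),
                    batch_size=32, num_workers=0)

load_time, forward_time = 0.0, 0.0
end = time.time()
for x, _ in loader:
    load_time += time.time() - end             # time spent waiting for the next batch (CPU side)

    x = Variable(x.type(dtype))
    torch.cuda.synchronize()                    # make sure pending GPU work is finished
    start = time.time()
    out = model(x)
    torch.cuda.synchronize()                    # wait for the forward pass to complete
    forward_time += time.time() - start

    end = time.time()

print('data loading: %.2fs, forward: %.2fs' % (load_time, forward_time))
```

If most of the time is spent in data loading, that would explain seeing 0% Volatile GPU-Util in nvidia-smi even though the forward pass itself does run on the GPU.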