GPU utilization 0% while is_cuda returns True

In my code, I make sure to call model.cuda() and wrap inputs as Variable(x.type(dtype)), where dtype = torch.cuda.FloatTensor.
I also print x.is_cuda and next(model.parameters()).is_cuda (note: is_cuda is an attribute, not a method), and both are True.
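For reference, a minimal sketch of the checks described above (the toy model and tensor shapes here are made up, not the poster's actual code):

```python
import torch
import torch.nn as nn

# Hypothetical toy model standing in for the actual network.
model = nn.Linear(10, 2)
x = torch.randn(4, 10)

if torch.cuda.is_available():   # guard so the sketch also runs on CPU-only machines
    model = model.cuda()        # move all parameters to the GPU
    x = x.cuda()                # move the input batch to the GPU

# is_cuda is an attribute in current PyTorch, not a method.
print(x.is_cuda, next(model.parameters()).is_cuda)
```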

So it seems that everything is on the GPU. But when I run the program (which executes model(x)) and monitor GPU usage with watch -n 1 nvidia-smi, the Volatile GPU-Util column shows 0%. So it looks like the GPU is not doing any work: processing is indeed very slow, and I don't feel any heat from my video card either. I can rule out the possibility that my card doesn't support CUDA, because everything worked until I switched to a customized dataloader for my dataset.

Is there anything I am missing to make sure the program actually uses the GPU?

Maybe your data loading is slow, so the CPU becomes the bottleneck?
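One way to check this is to time the loader separately from the forward pass. This is a sketch with a hypothetical stand-in dataset and model; the torch.cuda.synchronize() call matters because CUDA kernels run asynchronously, so timing without it would undercount GPU work:

```python
import time
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-ins for the real dataset and model.
dataset = TensorDataset(torch.randn(256, 10), torch.randint(0, 2, (256,)))
loader = DataLoader(dataset, batch_size=32)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(10, 2).to(device)

load_time = forward_time = 0.0
end = time.perf_counter()
for x, y in loader:
    load_time += time.perf_counter() - end   # time spent waiting on the loader
    x = x.to(device)
    t0 = time.perf_counter()
    out = model(x)
    if device == "cuda":
        torch.cuda.synchronize()             # wait for async CUDA kernels to finish
    forward_time += time.perf_counter() - t0
    end = time.perf_counter()

print(f"loading: {load_time:.4f}s  forward: {forward_time:.4f}s")
```

If the loading time dominates, the GPU is starved and nvidia-smi will show near-0% utilization even though everything is correctly placed on the GPU.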

Thanks for your reply, that might be it.
But would that cause 0% GPU utilization all the time? After all, at least the model's forward pass should run on the GPU. Sorry, I don't fully understand the interaction between the CPU and the GPU.

Yeah, but the loading happens on the CPU. If the data has not been loaded yet, you can't run the forward pass, so the GPU sits idle waiting for batches.

But I actually checked: I print the output's is_cuda right after the model() call, and it returns True. So the data batch was loaded and went through the forward pass, right?

Have you solved this problem? I have the same problem with my own collate_fn, and I am wondering if that is the cause.
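One quick test: time the collate step in isolation. This sketch uses a hypothetical minimal collate_fn; if your own version does heavy per-batch preprocessing, the timing here will show it as the bottleneck that leaves the GPU idle:

```python
import time
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical minimal collate_fn that just stacks samples into a batch.
def my_collate(samples):
    return torch.stack([x for (x,) in samples])

dataset = TensorDataset(torch.randn(64, 10))
loader = DataLoader(dataset, batch_size=16, collate_fn=my_collate)

start = time.perf_counter()
batches = list(loader)   # forces every batch through collate
print(f"collated {len(batches)} batches in {time.perf_counter() - start:.4f}s")
```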

Maybe you should check the state of your CPU and hard disk to see whether they are overloaded.