Why is my GPU memory usage always low?

My GPU memory usage is always low.
No matter how much I increase the batch size, it stays at about 224 MB all the time, which may just be my model's size?
I used .cuda() on the input data, the model, and the labels.
I also checked the data, and is_cuda is True, but GPU memory usage is still low. (BTW, I read the data with torch.from_numpy.)
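Here is a minimal sketch of my setup (the shapes, sizes, and variable names are simplified placeholders, not my real code):

```python
import numpy as np
import torch
import torch.nn as nn

# Placeholder data standing in for my real numpy files.
np_features = np.random.randn(64, 128).astype(np.float32)
np_labels = np.random.randint(0, 10, size=(64,))

model = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
).cuda()

input_data = torch.from_numpy(np_features).cuda()
labels = torch.from_numpy(np_labels).cuda()

print(input_data.is_cuda, labels.is_cuda)  # both print True
```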
Every epoch of my model also runs really fast…
The GPU utilization looks OK, ranging from 10% to 40%.
So what is wrong here?
Is there any other way to check this problem? Thanks.

What kind of model do you use?
If it's on the GPU, the memory usage should eventually grow.
If you print the weights of one layer, does it show cuda in the tensor type?
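For example, a minimal check like this (using a small stand-in Linear layer in place of your actual model):

```python
import torch.nn as nn

model = nn.Linear(128, 256).cuda()  # stand-in for your model

# A parameter on the GPU reports a cuda tensor type,
# e.g. torch.cuda.FloatTensor, and is_cuda == True.
for name, param in model.named_parameters():
    print(name, param.type(), param.is_cuda)
```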

I use a model with multiple Linear layers. Yes, the parameters are on CUDA.

Could you just blow up the number of units and check it again?
Just multiply the number of hidden units by 10 or 100 and check again until you run into an out-of-memory error. If nvidia-smi still shows low memory usage, something must be wrong.
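A rough sketch of that experiment (the layer widths and input shape here are made-up example values):

```python
import torch
import torch.nn as nn

# Scale the hidden width up and watch the allocated GPU memory grow;
# the base width of 256 and the (64, 128) input are made-up values.
for scale in (1, 10, 100):
    hidden = 256 * scale
    model = nn.Sequential(
        nn.Linear(128, hidden),
        nn.ReLU(),
        nn.Linear(hidden, 10),
    ).cuda()
    out = model(torch.randn(64, 128).cuda())
    print(f"x{scale}: {torch.cuda.memory_allocated() // 1024**2} MB allocated")
    del model, out
    torch.cuda.empty_cache()
```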

Thanks, I found that my dataset had an error when opening a numpy data file. It is strange that my code could still run… :sweat_smile:
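For anyone who runs into something similar: a quick sanity check right after loading would have caught this for me (the file path and expected dimensionality below are just placeholders):

```python
import numpy as np

data = np.load("features.npy")  # placeholder path

# Fail loudly instead of silently training on a bad array.
assert data.size > 0, "loaded an empty array"
assert data.ndim == 2, f"unexpected shape {data.shape}"
print(data.shape, data.dtype)
```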