RuntimeError: CUDA out of memory


I am training a pretrained ResNet-18 from torchvision.models on a dataset of 1050 images of size 3x240x320. After training, I test on 399 samples, but I get a RuntimeError: CUDA out of memory. I have also moved the test dataset to CUDA and set the volatile attribute to True. The model is on the GPU, and after training nvidia-smi reports 3243MiB/4038MiB used.

Is there any way available to free the GPU memory so that I can do the testing?

It’s possible that keeping both the training and test datasets on the GPU leaves too little room for the network to forward-prop. Loading your datasets into GPU memory by itself takes about 1.3 GB.
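As a rough sanity check of that 1.3 GB figure, assuming both the training and test images are stored on the GPU as 32-bit floats (the default tensor dtype), the arithmetic works out like this:

```python
# Back-of-the-envelope estimate of the GPU memory used by the data alone,
# assuming float32 storage for both datasets.
num_images = 1050 + 399          # training + test samples
elems_per_image = 3 * 240 * 320  # C x H x W
bytes_per_elem = 4               # float32

total_bytes = num_images * elems_per_image * bytes_per_elem
print(f"{total_bytes / 1e9:.2f} GB")  # ~1.34 GB before any model weights or activations
```

That is before counting the model parameters, gradients, and the intermediate activations of each forward pass, which is why a large inference batch can tip a 4 GB card over the edge.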

Are you giving a large batch size as input to your network? Can you reduce the batch size at inference time?
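A minimal sketch of what "reduce the batch size at inference time" can look like, using a DataLoader so that only one small batch is moved to the GPU at a time (the tensors and the tiny stand-in model here are hypothetical placeholders for the real test set and ResNet-18):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-ins for the real 399-sample test set.
test_x = torch.randn(399, 3, 240, 320)
test_y = torch.randint(0, 10, (399,))

# A small batch size bounds the activation memory of each forward pass,
# regardless of how large the full test set is.
loader = DataLoader(TensorDataset(test_x, test_y), batch_size=16)
num_batches = len(loader)  # 399 samples / 16 per batch -> 25 batches

model = torch.nn.Conv2d(3, 10, kernel_size=3)  # stand-in for the ResNet-18
model.eval()

correct = 0
with torch.no_grad():  # don't keep autograd buffers during inference
    for xb, yb in loader:
        # Move only the current batch to the GPU, if one is available.
        if torch.cuda.is_available():
            xb, yb = xb.cuda(), yb.cuda()
            model = model.cuda()
        out = model(xb).mean(dim=(2, 3))  # collapse to per-class scores
        correct += (out.argmax(dim=1) == yb).sum().item()
```

Because each batch is sent to the device inside the loop, the GPU only ever holds 16 samples plus their activations, instead of all 399 at once.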


Yes, I was using a large batch size, so I reduced it at inference time. It is working fine now. Thank you very much.

Also, remember to use the volatile flag for inference! It will greatly reduce the memory usage.
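For readers on current PyTorch: `volatile=True` was the pre-0.4 `Variable` mechanism; since 0.4 it has no effect and its replacement is the `torch.no_grad()` context, which achieves the same memory saving by never recording the computation graph. A minimal sketch (the linear model and input are hypothetical stand-ins):

```python
import torch

model = torch.nn.Linear(10, 2)  # stand-in model
x = torch.randn(4, 10)

# Inside torch.no_grad(), no graph is built, so the intermediate
# activations needed for backward are never stored -- this is the
# modern equivalent of the old volatile=True flag.
with torch.no_grad():
    out = model(x)

print(out.requires_grad)  # False: nothing was recorded for autograd
```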