CUDA out of memory while loading model weights

Hello,
I have trained a custom model in PyTorch and saved the weights with torch.save(net.state_dict(), path_to_model). When I try to load the model for testing with net.load_state_dict(torch.load(path_to_model)), a cuda runtime error (2) : out of memory is raised, even though the GPU had 12 GB of free memory while I was loading the weights.

I specified the GPU to use with torch.cuda.device(1) and initialized the model with net = my_net(3, 1).cuda(1). I have tried all the permutations ((0,0), (0,1), (1,0), (1,1)) of GPU indices to make sure it is not an indexing error. I have also noticed that even though I was trying to use the GTX Titan, the process was consuming 233 MB on the GTX 1080 Ti and printing a warning.
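One thing worth ruling out (an assumption on my part, since I can't see your full script): torch.cuda.device(1) is a context manager, so calling it as a bare statement does not actually change the default device, which could explain the stray allocation on the 1080 Ti. A minimal sketch of pinning the device explicitly:

```python
import torch

# Sketch, assuming at least two visible GPUs.
# torch.cuda.device(1) only takes effect inside a `with` block;
# torch.cuda.set_device(1) (or the CUDA_VISIBLE_DEVICES environment
# variable) changes the default device for the whole process.
if torch.cuda.is_available() and torch.cuda.device_count() > 1:
    torch.cuda.set_device(1)
    # net = my_net(3, 1).cuda()  # allocations now land on GPU 1
```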

I didn’t face any issue while training. Any idea why this is happening?
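In case it helps anyone hitting the same error: by default torch.load() restores each tensor to the device it was saved from, so weights saved on one GPU can allocate on a GPU you didn't intend and run out of memory there. Loading onto the CPU first and moving the module afterwards avoids that. A sketch, using nn.Linear(3, 1) as a stand-in for my_net(3, 1):

```python
import torch
import torch.nn as nn

# nn.Linear(3, 1) stands in for the poster's my_net(3, 1).
net = nn.Linear(3, 1)
torch.save(net.state_dict(), "model.pth")

# map_location="cpu" deserializes every tensor onto the CPU instead of
# the GPU it was saved from, so no CUDA memory is touched during load.
state = torch.load("model.pth", map_location="cpu")
net.load_state_dict(state)
# net = net.cuda(1)  # move to the intended GPU afterwards, if available
```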

Thanks in advance!

I think:

A. You may need to update your CUDA drivers:

https://xcat-docs.readthedocs.io/en/stable/advanced/gpu/nvidia/verify_cuda_install.html

https://devtalk.nvidia.com/default/topic/1027653/how-do-i-check-if-i-install-cuda-and-cudnn-successfully-/

B. Then, I suggest reinstalling the PyTorch binaries:

conda install pytorch torchvision -c pytorch
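After reinstalling, a quick way to check from Python whether the binaries actually see the driver (just a sanity-check sketch; the values printed will depend on your setup):

```python
import torch

# Prints what the installed build can see at runtime.
print(torch.__version__)
print(torch.version.cuda)         # CUDA version the binaries were built against
print(torch.cuda.is_available())  # False usually points to a driver/runtime mismatch
print(torch.cuda.device_count())
```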

Did you ever find a solution? My problem is similar.