CUDA out of memory while loading model weights

I trained a custom model in PyTorch and saved the weights with torch.save(net.state_dict(), path_to_model). When I try to load the model for testing with net.load_state_dict(torch.load(path_to_model)),
cuda runtime error (2) : out of memory is raised. The GPU had 12 GB of free space while I was trying to load the weights.

I specified the GPU device with torch.cuda.device(1) and initialized the model with net = my_net(3, 1).cuda(1). I have tried all permutations of the two GPU indices (0/0, 0/1, 1/0, 1/1) to make sure it is not an indexing error. I also noticed that even though I was trying to use the GTX Titan, the process was consuming 233 MB on the GTX 1080 Ti and displaying this warning -


I didn’t face any issues during training. Any idea why this is happening?

Thanks in advance!

I think:

A. you need to update your CUDA drivers.

B. then, I suggest you reinstall the PyTorch binaries:

conda install pytorch torchvision -c pytorch
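If reinstalling doesn't help, one more thing worth checking (a sketch under assumptions, not a confirmed diagnosis of the poster's setup): torch.load by default restores each tensor to the device it was saved from, so a checkpoint written on cuda:0 will allocate memory on cuda:0 at load time even if the model now lives on another GPU; passing map_location remaps the tensors instead. Here nn.Linear stands in for the poster's my_net, and we map to CPU so the sketch runs on any machine:

```python
import torch
import torch.nn as nn

# Stand-in for the poster's my_net(3, 1).
net = nn.Linear(3, 1)

# Save only the state dict, matching the poster's load call.
torch.save(net.state_dict(), "weights.pth")

# map_location remaps saved tensors at load time. On a multi-GPU box
# you could pass map_location="cuda:1" to pull the weights onto the
# second GPU; mapping to "cpu" here keeps the example GPU-free.
state = torch.load("weights.pth", map_location="cpu")
net.load_state_dict(state)

# Aside: torch.cuda.device(1) on its own line is a no-op -- it is a
# context manager meant for a `with` block. The imperative equivalent
# for selecting the default GPU is torch.cuda.set_device(1).
```

After loading onto CPU (or the intended GPU), net.cuda(1) would move the already-loaded weights to the desired device without first allocating on the GPU the checkpoint came from.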

Ever found a solution? My problem is similar.