model.cuda() leads to a "CUDA out of memory" error

Hi, I used the following code:
model = torchvision.models.resnet50(pretrained=True)
model.cuda()
Then it returned an error message saying that CUDA is out of memory. Can anybody tell me how to fix this problem?

Could you check the free GPU memory via nvidia-smi and make sure your device has enough free memory to store the resnet50 on it?
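For reference, one way to query per-GPU memory from the command line (assuming the NVIDIA driver's nvidia-smi tool is on your PATH):

```shell
# Show total/used/free memory for each visible GPU
nvidia-smi --query-gpu=index,name,memory.total,memory.used,memory.free --format=csv
```

This prints one CSV row per GPU, which makes it easy to spot a device that is already nearly full before you even load the model.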

Hi, the error message looks like the following:
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 10.92 GiB total capacity; 10.34 GiB already allocated; 9.38 MiB free; 10.34 GiB reserved in total by PyTorch)
It occurred when executing the following code:
for i_batch, sample_batched in enumerate(TraindataLoader):
    local_X = sample_batched['input'].type(dtype)
    local_Y = sample_batched['EEGResponse'].type(dtype)
    yhat = model(local_X)  # <-- error occurred here

Looking forward to your suggestions.

As the error message describes, your device has only 9.38 MiB of free memory while PyTorch is trying to allocate 20 MiB, so you would need to reduce the memory requirement, e.g. by lowering the batch size.
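The numbers in the traceback already tell the story; a quick sanity check with the figures copied from the error message above:

```python
# Values taken from the error message in this thread
GiB = 2**30
MiB = 2**20

total = 10.92 * GiB      # total GPU capacity
reserved = 10.34 * GiB   # already reserved by PyTorch's caching allocator
free = 9.38 * MiB        # free memory left on the device
requested = 20 * MiB     # size of the failed allocation

# The request exceeds what is free, hence the out-of-memory error
print(f"requested {requested / MiB:.0f} MiB, but only {free / MiB:.2f} MiB free")
```

Activation memory scales roughly linearly with batch size, so halving the batch size roughly halves the memory needed for the forward pass.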