CUDA out of memory with colab

I am working on a classification problem in Google Colab, using transfer learning with ResNet. I have 5955 images in the train set. When I run my code, I get the following error:

RuntimeError: CUDA out of memory. Tried to allocate 1.53 GiB (GPU 0; 14.76 GiB total capacity; 12.24 GiB already allocated; 1.27 GiB free; 12.44 GiB reserved in total by PyTorch)

I have tried to clear the cache with:

import gc
import torch

device = 'cuda'
gc.collect()
torch.cuda.empty_cache()

That does not seem to work either. I am using a batch size of 512 for the train loader and 256 for the valid and test loaders.

1) Try reducing the batch size.
What image size are you using?

Go to Runtime => Restart runtime.
Check GPU memory usage by entering the following command:
!nvidia-smi
If it shows 0 MiB used, run the training function again.
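If restarting frees the memory but training still fails, the usual fix is to pass a smaller batch_size to the DataLoader. A minimal sketch with a placeholder dataset (the random tensors below stand in for the real 5955-image train set):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data standing in for the real train set
images = torch.randn(128, 3, 224, 224)
labels = torch.randint(0, 2, (128,))
train_set = TensorDataset(images, labels)

# Dropping batch_size from 512 to 64 cuts per-step activation memory roughly 8x
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)

xb, yb = next(iter(train_loader))
print(xb.shape)  # torch.Size([64, 3, 224, 224])
```

The gradient updates become noisier with a smaller batch, but training will fit in memory; you can also compensate with gradient accumulation if you want the effective batch size of 512 back.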

It's 224x224. I reduced the batch size from 512 to 64, and that worked, but I do not understand why.

The batch size is the number of samples propagated through the network at a time, so the memory one forward/backward pass needs is a function of it: the higher the batch size, the more GPU memory is required.
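A rough back-of-the-envelope calculation makes this concrete. This counts only the float32 input tensor itself; the intermediate activations ResNet keeps around for the backward pass are far larger, but they scale with batch size the same way:

```python
def input_batch_bytes(batch_size, channels=3, height=224, width=224, bytes_per_elem=4):
    """Bytes needed just to hold one float32 input batch on the GPU."""
    return batch_size * channels * height * width * bytes_per_elem

# 512 x 3 x 224 x 224 float32 values
print(input_batch_bytes(512) / 2**20)  # 294.0 MiB
print(input_batch_bytes(64) / 2**20)   # 36.75 MiB
```

So going from 512 to 64 shrinks every per-batch tensor, inputs, activations, and their gradients, by a factor of 8, which is why the same model suddenly fits in the 14.76 GiB of GPU memory.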