Memory Management using PYTORCH_CUDA_ALLOC_CONF

Can I do anything about this, while training a model I am getting this cuda error:

RuntimeError: CUDA out of memory. Tried to allocate 30.00 MiB (GPU 0; 2.00 GiB total capacity; 1.72 GiB already allocated; 0 bytes free; 1.74 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

I reduced batch_size from 32 to 8. Can I do anything else with my 2 GB card? :stuck_out_tongue:
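Since the error message itself suggests `max_split_size_mb`, one thing worth trying is setting `PYTORCH_CUDA_ALLOC_CONF` before PyTorch initializes its CUDA allocator. This is only a sketch: the value `128` is an assumption to illustrate the syntax, not a recommendation from this thread, so tune it for your workload.

```python
import os

# PYTORCH_CUDA_ALLOC_CONF must be set before torch touches the GPU,
# i.e. before the first CUDA tensor is created (safest: before `import torch`).
# max_split_size_mb caps the size of cached blocks the allocator will split,
# which can reduce fragmentation when reserved memory >> allocated memory.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```

Equivalently, you can export the variable in the shell before launching your training script (`export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128`), which avoids any ordering issues inside the Python code.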

Hi @krishna511,

You can try reducing the image size, the batch size, or even switching to a smaller model.

I suggest you try Google Colab (which is free) to train your model: with only 2 GB it is very challenging.
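To see why image size matters as much as batch size: activation memory for a conv feature map scales linearly with batch size but quadratically with spatial resolution. This is a rough back-of-the-envelope sketch; the 64-channel layer, 224x224 input, and float32 storage are assumptions for illustration, not details from this thread.

```python
def feature_map_bytes(batch, channels, height, width, bytes_per_elem=4):
    # float32 activations take 4 bytes per element
    return batch * channels * height * width * bytes_per_elem

# one 64-channel feature map, 224x224 input, batch 32
full = feature_map_bytes(32, 64, 224, 224)
# same layer with batch 8 and inputs downscaled to 112x112
small = feature_map_bytes(8, 64, 112, 112)

print(full / 2**20)   # 392.0 MiB for a single layer's activations
print(small / 2**20)  # 24.5 MiB -- a 16x reduction
```

Halving the image side quarters the activation memory, and a real network stores activations like this for many layers at once (they are kept for the backward pass), so cutting input resolution often frees far more memory than cutting batch size alone.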