CUDA out of memory problem!

Hello!
I am a university student studying speech recognition in Korea.

Recently I ran into an annoying situation and REALLY need some help.
It's an out-of-memory error, and the message is below:
“CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 12.00 GiB total capacity; 9.42 GiB already allocated; 6.67 MiB free; 9.71 GiB reserved in total by PyTorch)”

My question is: if the GPU has 12.00 GiB of total capacity, why does my model run out of memory when it has only allocated 9.42 GiB (9.71 GiB reserved at most)?

Is there extra reserved memory that CUDA (or PyTorch) allocates internally to handle specific situations?

If so, is there any way to disable it?
It's really frustrating for a poor university student who can't afford an additional GPU, haha.

It would be lovely if someone could help me.
Thank you very much!

The message points to a very small amount of free memory (~6 MiB), which is not sufficient for the ~20 MiB allocation your code requested.
This can happen e.g. due to memory fragmentation. Also note that "reserved" memory includes cached blocks the PyTorch allocator keeps around for reuse, and that the CUDA context itself (plus any other processes) consumes device memory, so the full 12 GiB is never available to your tensors.
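
If you want to see where the memory actually goes, you can query the caching allocator directly. A minimal sketch (run it right before the failing allocation):

```python
import torch

# "allocated" is memory held by live tensors; "reserved" additionally
# includes cached blocks PyTorch keeps for reuse. The gap between
# "reserved" and the 12 GiB total is taken by the CUDA context,
# other processes, and fragmentation.
print(f"allocated: {torch.cuda.memory_allocated() / 1024**3:.2f} GiB")
print(f"reserved:  {torch.cuda.memory_reserved() / 1024**3:.2f} GiB")

# Detailed per-pool statistics, including fragmentation hints:
print(torch.cuda.memory_summary(abbreviated=True))
```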
Try lowering your batch size and running the code again. Alternatively, you could take a look at torch.utils.checkpoint to trade compute for memory (sketch below).
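
Here is a minimal sketch of the checkpointing route. `Encoder` and its sizes are placeholders, not your actual speech model, and the `use_reentrant` argument only exists in newer PyTorch releases (drop it on older ones):

```python
import torch
from torch.utils.checkpoint import checkpoint

# Gradient checkpointing: the wrapped block's activations are freed
# after the forward pass and recomputed during backward, trading
# extra compute for lower peak memory.
class Encoder(torch.nn.Module):
    def __init__(self, hidden=512):
        super().__init__()
        self.block = torch.nn.Sequential(
            torch.nn.Linear(hidden, hidden),
            torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden),
        )

    def forward(self, x):
        # use_reentrant=False is the recommended mode in recent versions
        return checkpoint(self.block, x, use_reentrant=False)

model = Encoder().cuda()
x = torch.randn(8, 512, device="cuda", requires_grad=True)
model(x).sum().backward()  # block activations are recomputed here
```

Checkpointing the largest sub-modules (e.g. transformer or RNN layers in a speech model) usually gives the biggest savings, at the cost of roughly one extra forward pass through those modules.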