CUDA out of memory (specific question)

I have an RTX 2080 Ti (11 GB) and I'm training EfficientDet on Windows 10.
I don't fully understand why I have only 320.00 KiB free. Can anyone explain this issue?
I understand that for the allocation to succeed, the 16.00 MiB it tried to allocate would have to fit within the 320.00 KiB free, which it clearly doesn't.
I'd like to know why…

RuntimeError: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 11.00 GiB total capacity; 8.87 GiB already allocated; 320.00 KiB free; 9.03 GiB reserved in total by PyTorch)

@stas Could you help me with this issue?

Sometimes this has to do with memory fragmentation. Sometimes you are just out of memory. Here it's clearly the latter, since 16 MiB > 320 KiB.
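
If you want to check those numbers yourself rather than only seeing them in the OOM message, you can query the allocator directly. Here is a minimal sketch, assuming a single GPU at `cuda:0` and a PyTorch version recent enough to have `torch.cuda.mem_get_info` (otherwise `nvidia-smi` gives you the free/total view):

```python
import torch

device = torch.device("cuda:0")

# Memory currently occupied by live tensors
allocated = torch.cuda.memory_allocated(device)
# Memory held by PyTorch's caching allocator (allocated + cached blocks)
reserved = torch.cuda.memory_reserved(device)
# Free / total device memory as reported by the CUDA driver
free, total = torch.cuda.mem_get_info(device)

print(f"allocated: {allocated / 2**20:.1f} MiB")
print(f"reserved:  {reserved / 2**20:.1f} MiB")
print(f"free:      {free / 2**20:.1f} MiB of {total / 2**20:.1f} MiB")
```

If `reserved` is already close to the card's 11 GiB right before the failing allocation, as in your traceback, the cure is simply to use less memory: a smaller batch size, a smaller input resolution, mixed precision, or gradient accumulation.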

Typically users are puzzled by the memory fragmentation scenario, where the OOM happens while trying to allocate, say, 16 MiB even though 200 MiB are reported free. That seems to make no sense until you realize that within those 200 MiB there isn't a single contiguous chunk of free memory larger than 16 MiB.
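
When you do suspect fragmentation rather than a genuine shortage, `torch.cuda.memory_summary()` shows how the caching allocator's segments are laid out. A rough sketch, again assuming `cuda:0`:

```python
import torch

# Prints a breakdown of allocated, reserved and inactive (split) blocks;
# many small inactive segments with no large free block is a sign of fragmentation.
print(torch.cuda.memory_summary(device=torch.device("cuda:0"), abbreviated=True))
```

In newer PyTorch releases the allocator can also be tuned through the `PYTORCH_CUDA_ALLOC_CONF` environment variable (e.g. `max_split_size_mb`), which can mitigate fragmentation, though it won't help when you are simply out of memory as above.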

Memory fragmentation is discussed in multiple threads on this forum; here is one of them where I was trying to figure it out: