CUDA out of memory - VGG16

I have been looking for a way to load a VGG16 model on a 12 GB GPU without getting a CUDA out of memory error, but I have not found one yet. The exact sizes may differ between VGG architectures, but I get this error message:

RuntimeError: CUDA out of memory. Tried to allocate 5.49 GiB (GPU 0; 10.92 GiB total capacity; 6.27 GiB already allocated; 4.09 GiB free; 20.64 MiB cached)

I have tried parallelizing the model by increasing the GPU count, but I don't think that is possible in my setup. Is there any way to run a VGG16 model on a 12 GB GPU? Any help would be appreciated. Thanks.

You could reduce the input size?

You can solve that problem by reducing the batch size and/or the image dimensions.
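For a torchvision setup, a minimal sketch of that adjustment could look like the following (the dataset path, batch size, and image size here are just placeholder values):

    import torch
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    # A smaller batch (and, if your model tolerates it, smaller images)
    # directly shrinks the activation memory VGG16 needs on the GPU.
    transform = transforms.Compose([
        transforms.Resize((224, 224)),   # try e.g. 160x160 to save even more memory
        transforms.ToTensor(),
    ])

    # "data/train" is a placeholder path
    train_set = datasets.ImageFolder("data/train", transform=transform)
    train_loader = DataLoader(train_set, batch_size=8, shuffle=True)  # e.g. 8 instead of 64

    model = models.vgg16(pretrained=True).cuda()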


I had the same problem. The main reason is usually that you try to load all of your data onto the GPU at once.
A possible solution is to reduce the batch size, load only a few samples onto the GPU at a time, and move the results back to the CPU with .cpu() after each computation (but it depends; I don't know your code).
You should also check whether your GPU is actually free, because it may be busy with another process. Let me know.
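In code, that pattern could look roughly like this (model and test_loader are assumed to be defined elsewhere):

    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = model.to(device)
    model.eval()

    all_outputs = []
    with torch.no_grad():                       # no gradients needed for evaluation
        for images, labels in test_loader:
            images = images.to(device)          # only the current batch lives on the GPU
            outputs = model(images)
            all_outputs.append(outputs.cpu())   # move results back to the CPU right away

    all_outputs = torch.cat(all_outputs)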

I had the same problem before. Adding the following after every training and testing batch

torch.cuda.empty_cache()

worked for me. However, it is always better to watch the GPU stats; you can use gpustat for that. For example, if you run the code and don't restart the kernel before the next run, some data from the previous run can stay stuck in GPU memory.
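A rough sketch of where that call would go in a training loop (model, criterion, optimizer, and train_loader are assumed to exist elsewhere); note that empty_cache() only releases cached blocks, not tensors you still hold references to:

    import torch

    for images, labels in train_loader:
        images, labels = images.cuda(), labels.cuda()

        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

        # Drop references first, then release the cached blocks back to the driver.
        del images, labels, loss
        torch.cuda.empty_cache()

To watch the memory while this runs, you can install gpustat with pip and run it in watch mode (for example gpustat -i).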

First you have to calculate its model space (the sum of the input size, forward/backward pass size, and params size). Then you have to change the input size or batch size to see whether your model space fits your GPU (< 12 GB in your case). If it fits, you are good to go. You can use the torchsummary package to calculate the model size.
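For example, with the torchsummary package (a rough sketch; the printed numbers depend on the input size and batch size you pass in):

    import torch
    from torchvision import models
    from torchsummary import summary   # pip install torchsummary

    model = models.vgg16().cuda()

    # Prints per-layer output shapes plus estimates (in MB) for the input size,
    # forward/backward pass size, params size, and estimated total size.
    summary(model, input_size=(3, 224, 224), batch_size=32)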