Releasing GPU memory after deleting a model

I’m experiencing some trouble with the GPU memory not being released after deleting a model.

The pseudo-code looks something like this:

import torch

for _ in range(5):
    data = get_data()              # load a batch onto the GPU
    model = MyModule().cuda()      # PyTorch model, ~4 GB of parameters
    results = model(data)
    del model
    torch.cuda.empty_cache()

The model occupies around 4 GB of GPU memory, and on the second iteration this code crashes on my 8 GB GPU with the following error:

RuntimeError: CUDA out of memory. Tried to allocate 3.73 GiB (GPU 0; 7.93 GiB total capacity; 3.73 GiB already allocated; 3.54

I’m probably misunderstanding something here, but I thought the del operation together with empty_cache() would free up the memory.

Thanks for any input. :slight_smile:


Hi,

The thing is that results most likely requires gradients, and computing those gradients would need most of the model's parameters. The autograd graph attached to results therefore keeps references to them, so the model cannot be freed yet :slight_smile:
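
A minimal sketch of the fix, assuming the loop only needs forward passes (get_data and MyModule are the placeholders from the pseudo-code above): running inference under torch.no_grad() prevents the autograd graph from being built, so results holds no references to the parameters and del model plus empty_cache() actually releases the memory.

import torch

for _ in range(5):
    data = get_data()
    model = MyModule().cuda()
    with torch.no_grad():      # no autograd graph is recorded
        results = model(data)  # results has no grad_fn, so it keeps no parameter references
    del model
    torch.cuda.empty_cache()   # return cached blocks to the driver

Alternatively, keeping only a detached copy (results = results.detach()) drops the graph and its references as well, at the cost of not being able to call backward() through results.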


:man_facepalming: That was indeed it.
Thanks for your prompt reply! :slight_smile: