CUDA out of memory despite enough free CUDA memory

Dear all,
Currently, I am training a GRU network for speech recognition using PyTorch. Training finishes successfully; however, at evaluation time a CUDA out-of-memory runtime error occurs.
The actual error is RuntimeError: CUDA out of memory. Tried to allocate 1.72 GiB (GPU 0; 11.92 GiB total capacity; 5.72 GiB already allocated; 1.65 GiB free; 4.04 GiB cached)

My GPU has around 5.69 GiB (4.04 GiB cached + 1.65 GiB free) of unused memory, yet CUDA is unable to allocate 1.72 GiB, which seems unreasonable.
Please suggest possible solutions to overcome this error.

With best regards,

Hello there,

There are a few things you can do to lower your memory footprint. First, run your validation code under torch.no_grad() so that no gradients (or intermediate activations) are saved. Are you by any chance running your validation code inside your training loop? If so, there might be a few tensors you could delete from the training loop first, e.g. del training_input, del training_output, del .... Lastly, I found that putting this line once before training lowers my memory footprint, though I don’t know its inner workings:
torch.backends.cudnn.benchmark = True # Optimizes cudnn
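To make the first suggestion concrete, here is a minimal sketch of what an evaluation pass under torch.no_grad() looks like. The tiny GRU and random batch here are hypothetical stand-ins for the poster's actual speech model and data; the point is only that wrapping the forward pass in the no_grad() context keeps autograd from building a graph, which is usually the biggest memory saving during evaluation.

```python
import torch
import torch.nn as nn

# Hypothetical small GRU standing in for the actual speech model.
model = nn.GRU(input_size=8, hidden_size=16, batch_first=True)
model.eval()  # switch off training-only behaviour (dropout, etc.)

# Hypothetical batch: (batch, time, features).
batch = torch.randn(4, 10, 8)

# Inside this context autograd records nothing, so no intermediate
# activations are kept alive for a backward pass.
with torch.no_grad():
    output, hidden = model(batch)

print(output.requires_grad)  # False: no graph was built for this output
```

The same pattern applies to a full validation loop: put the loop body inside the with block (or decorate the whole evaluation function with @torch.no_grad()).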

I don’t know about the cache though, good luck 🙂


Thank you very much Oli.
I have already applied the first two memory-footprint methods, but I have not tried the third one. I will try it and report back with the result.