Hi there, I got this runtime error while running my code on CUDA:
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 4.00 GiB total capacity; 33.22 MiB already allocated; 2.65 MiB free; 40.00 MiB reserved in total by PyTorch)
I notice that the memory reserved by PyTorch is extremely small. I'm using a GTX 1050 Ti with torch version 1.4.0, driver version 457.09, and CUDA version 11.1.
Is this an issue with my CUDA settings? Does anyone know how I can fix this?
Cheers
Hi, thanks for your reply. Since I am using Windows I can't monitor GPU memory via nvidia-smi, and I googled but couldn't find a replacement.
I do notice that the dedicated GPU memory usage was empty at the beginning but increased to 3.7 GB when I ran my code, though according to the error message only 268 MiB was reserved by PyTorch.
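In case it helps anyone else on Windows: even without per-process nvidia-smi output, you can read PyTorch's own memory counters from inside the script. A minimal sketch (the function name `report_gpu_memory` is just for illustration; `torch.cuda.memory_reserved` was introduced around torch 1.4, replacing the older `memory_cached`):

```python
import torch

def report_gpu_memory(device=0):
    """Return (allocated, reserved) MiB for PyTorch tensors on `device`,
    or None when no CUDA device is available."""
    if not torch.cuda.is_available():
        return None
    # Bytes actually occupied by live tensors.
    allocated = torch.cuda.memory_allocated(device) / 2**20
    # Bytes held by PyTorch's caching allocator (what the error calls "reserved").
    reserved = torch.cuda.memory_reserved(device) / 2**20
    return allocated, reserved

print(report_gpu_memory())
```

Note that this only accounts for PyTorch's allocator; memory grabbed by other libraries in the same process (or by other processes) won't show up here, which is why "reserved" can look tiny while the card is nearly full.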
Hi @ptrblck, thanks for your help. I executed nvidia-smi on Windows but only got N/A for each process's GPU usage; however, I did find the cause of my problem.
Since I load data from a tfrecord file, I import TensorFlow to do the data preprocessing, and TF takes up all the GPU memory by default. I flush CUDA after the preprocessing and everything works fine now!
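For anyone hitting the same thing: instead of freeing memory afterwards, you can stop TensorFlow from reserving the whole card in the first place. A hedged sketch, assuming TF 2.x (the helper name `enable_tf_memory_growth` is mine; in TF 1.x the equivalent is `tf.ConfigProto` with `gpu_options.allow_growth = True`):

```python
def enable_tf_memory_growth():
    """Ask TensorFlow to allocate GPU memory on demand instead of
    reserving the entire GPU up front, leaving the rest for PyTorch.
    Must be called before TF creates its first GPU context."""
    import tensorflow as tf  # local import so the helper is optional
    for gpu in tf.config.experimental.list_physical_devices('GPU'):
        tf.config.experimental.set_memory_growth(gpu, True)
```

Call this once, before running any TF preprocessing ops, and TF will only grow its allocation as needed rather than claiming all 4 GiB at startup.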