I am monitoring using nvidia-smi, but the memory usage never reaches close to maximum.
In another post, I saw someone calculating free memory as:
free memory = reserved memory - allocated memory
In this case, it makes sense.
But then I’m not sure how to reserve more memory, since it looks like it’s done automatically.
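For what it's worth, the numbers in that formula can be read directly from PyTorch. Here is a minimal sketch (the function name `report_gpu_memory` is mine) that compares the caching allocator's view with the driver-level view that nvidia-smi reports; this assumes a reasonably recent PyTorch that has `torch.cuda.mem_get_info`:

```python
import torch

def report_gpu_memory(device=0):
    """Compare PyTorch's caching-allocator stats with the driver-level view."""
    if not torch.cuda.is_available():
        print("CUDA not available")
        return None
    allocated = torch.cuda.memory_allocated(device)  # bytes held by live tensors
    reserved = torch.cuda.memory_reserved(device)    # bytes held by the caching allocator
    free, total = torch.cuda.mem_get_info(device)    # driver view, roughly what nvidia-smi shows
    print(f"allocated:           {allocated / 2**20:8.1f} MiB")
    print(f"reserved:            {reserved / 2**20:8.1f} MiB")
    print(f"cached but unused:   {(reserved - allocated) / 2**20:8.1f} MiB")
    print(f"driver free / total: {free / 2**20:.1f} / {total / 2**20:.1f} MiB")
    return allocated, reserved, free, total
```

The "cached but unused" line is the `reserved - allocated` quantity from the formula above; nvidia-smi only sees `reserved` (plus CUDA context overhead), which is why its number never matches what your tensors actually occupy.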
I’m also getting a similar type of error.
I have fine-tuned YOLOv5 on a custom dataset, and it went very well. After getting the best weights, I tried to refine them by training for 30 more epochs, but I get a CUDA out of memory error.
It says you have only 252.69 MiB of CUDA memory free and you are trying to allocate 250.00 MiB, which is pretty close. It makes sense to me that you don’t have enough GPU memory available in that situation.
In my case it tries to allocate 3.62 GiB while 20.41 GiB of GPU memory is reported free, which I think is weird.
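One possible explanation for "allocation fails despite plenty of free memory" is fragmentation of the caching allocator's blocks. A sketch of two things that are commonly tried, assuming a PyTorch version that supports the `PYTORCH_CUDA_ALLOC_CONF` variable with `max_split_size_mb` (the helper name `release_cached_memory` is mine):

```python
import os

# Assumption: must be set before the first CUDA allocation in the process,
# so it is usually placed at the very top of the training script.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import gc
import torch

def release_cached_memory():
    """Drop dead Python references, then return cached blocks to the driver."""
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
```

`empty_cache()` does not free memory held by live tensors, so calling it between runs (or after `del model`) mainly helps when stale references are keeping old blocks reserved; the `max_split_size_mb` setting limits how large blocks get split, which can reduce fragmentation at some cost in allocator efficiency.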