Why did PyTorch run my code fine before, but when I run it again this time I get: CUDA out of memory. Tried to allocate 50.00 MiB (GPU 0; 6.00 GiB total capacity; 353.72 MiB already allocated; 3.86 GiB free; 396.00 MiB reserved in total by PyTorch)
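For context, one way to see where that memory is going is to query PyTorch's allocator counters before training starts (a minimal sketch, assuming a CUDA build of PyTorch; the `report_cuda_memory` helper name is mine):

```python
import torch

def report_cuda_memory(device: int = 0) -> None:
    """Print a quick summary of PyTorch's CUDA allocator state."""
    if not torch.cuda.is_available():
        print("CUDA not available")
        return
    # Memory held by live tensors vs. memory cached by the allocator.
    allocated = torch.cuda.memory_allocated(device) / 2**20
    reserved = torch.cuda.memory_reserved(device) / 2**20
    total = torch.cuda.get_device_properties(device).total_memory / 2**20
    print(f"allocated: {allocated:.1f} MiB | "
          f"reserved: {reserved:.1f} MiB | total: {total:.1f} MiB")

report_cuda_memory()
```

If "reserved" is large while "allocated" is small, the memory is sitting in PyTorch's cache rather than in live tensors.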

Hi, hope you are well.
I'm getting a similar kind of error.
First I fine-tuned YOLOv5 on custom data.
After obtaining the best weights, I tried to refine the model on those weights but with a different image size.
But now I can't run the model even with a small batch size.
I also can't run any other model, such as a detection transformer, on the machine.
I tried to empty the cache as suggested, but to no avail.
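The cache-clearing I tried looked roughly like this (a minimal sketch; note that `empty_cache()` cannot free tensors that are still referenced, so any old model/optimizer objects have to be deleted first):

```python
import gc
import torch

def release_gpu_memory() -> None:
    # Delete references to the previous run first, e.g.
    #   del model, optimizer
    # then collect Python-side garbage so the tensors become unreachable.
    gc.collect()
    if torch.cuda.is_available():
        # Return cached allocator blocks to the driver.
        torch.cuda.empty_cache()

release_gpu_memory()
```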
I also tried to run the code on a different machine with the same GPU (an RTX 3080 with 10 GB of VRAM).


Any idea how I can fix this error?