CUDA error: out of memory, why?

I’m testing an object tracker. While running the code, I found that when testing the fifth video in the dataset, I get the error: CUDA error: out of memory. I can’t figure out why.
Could you please suggest some methods to solve this problem?

Hi,

This can happen for several reasons. If it happens after a few iterations, it most likely means you are increasing your GPU memory usage from one iteration to the next, for example by appending whole video frames to a list at every iteration of the loop.
Can you give more details on what you’re doing?
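To illustrate the failure mode described above, here is a minimal sketch (the model and loop are hypothetical, not the poster’s actual code): appending a tensor that still carries its autograd graph keeps every iteration’s activations alive, so memory grows with each step; storing a detached value frees the graph.

```python
import torch

model = torch.nn.Linear(8, 1)
losses_bad, losses_ok = [], []

for _ in range(3):
    out = model(torch.randn(4, 8)).mean()
    losses_bad.append(out)        # keeps the whole autograd graph alive -> memory grows
    losses_ok.append(out.item())  # plain Python float -> graph can be freed
```

The same applies to storing frames or features: keep `.detach()` (or `.item()` / `.cpu()`) copies unless you genuinely need gradients through them.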

Thanks a lot for your attention.
I use VOT2016 dataset which has many videos as subdatasets to test the tracker finetuning SiamRPN that I add a new feature extraction part to it.And this part will store each frame information just like LSTM.To testify my code of this new part is correct,I just test the whole code without train.Although it works every time,but when the tracker read-in the fifth vedio,there will an error:CUDA error: out of memory.
I don’t know whether it is because the the code I have added.Could you please tell me how to deal with this?Thanks a lot.

Hi,

You can easily check the previous code by running it on its own, I guess.
For the new code, you want to make sure you don’t store too many Tensors, which would fill up your memory.
Also, be careful to use `with torch.no_grad():` when evaluating your model to reduce memory usage.

That’s cool.
My problem has been solved.
Thank you very much.

How did you solve the problem?