Why does my algorithm use more and more memory over time?

Hi, I noticed that my algorithm occupies more and more GPU memory over time and, as a result, eventually hits the GPU memory limit and runs out of memory.

I don't understand which common factors can lead to this. I have checked my code and it seems normal. Can anyone give some tips on how to track down this kind of bug?

This usually indicates that you are holding some tensors in a list (or something similar), which prevents PyTorch from freeing the memory. In the worst case, you have a list of non-detached tensors which are still tracked by autograd and keep the whole computation graph (and its gradient path) alive.
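For illustration, here is a minimal sketch of that pattern in a toy training loop (the model and data are made up for the example): appending the raw `loss` tensor keeps each iteration's graph alive, while `loss.item()` (or `loss.detach()`) does not. Printing `torch.cuda.memory_allocated()` periodically is one way to spot this kind of growth.

```python
import torch
import torch.nn as nn

# Hypothetical tiny model and data, just to illustrate the pattern.
model = nn.Linear(10, 1).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.MSELoss()

losses = []
for step in range(1000):
    x = torch.randn(32, 10, device="cuda")
    y = torch.randn(32, 1, device="cuda")

    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

    # BUG: appending the attached tensor keeps each iteration's
    # computation graph alive, so GPU memory grows every step:
    # losses.append(loss)

    # FIX: detach from the graph, or store the plain Python float.
    losses.append(loss.item())

    if step % 100 == 0:
        # Watching the allocated memory helps catch such leaks early.
        print(step, torch.cuda.memory_allocated())
```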

If you can post your code, we could have a look at it.


Hi, could you kindly leave your email so that I can send you my source code (2 files)?
@justusschock