The code I actually use is more complex, but even this simplified version shows the problem. On the first iterations after creating the lr_scheduler, my GPU memory usage is low, but as the iterations go on, the code needs more and more GPU memory. I don't know why this happens, but when I delete the lr_scheduler, the GPU memory growth disappears. Has anybody seen this issue? I really need help.
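The original snippet is not shown in this excerpt; the following is a hypothetical minimal sketch of the kind of loop being described: a small model trained with an optimizer plus a `torch.optim.lr_scheduler` (here `StepLR`, an assumption), with GPU memory printed each iteration so growth can be observed. The model, sizes, and scheduler choice are illustrative, not the poster's actual code.

```python
import torch
import torch.nn as nn

# Fall back to CPU so the sketch runs anywhere; memory is only reported on CUDA.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Linear(10, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# StepLR is an assumed scheduler choice; the report doesn't say which one was used.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)

for step in range(50):
    x = torch.randn(32, 10, device=device)
    loss = model(x).pow(2).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()

    if device == "cuda":
        # If the reported leak occurs, this number grows across iterations
        # instead of staying flat after the first few steps.
        print(step, torch.cuda.memory_allocated())
```

If memory climbs here but stays flat once the `scheduler.step()` call (and the scheduler object) is removed, that would match the behavior described above.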
Thanks for the executable code snippet!
I failed to reproduce the issue with 1.4.0.dev20191109, as my memory usage stays constant.
Which PyTorch version are you using?