How to immediately free the GPU memory consumed inside the forward method once forward returns

I have a forward method that looks like this:

def forward(self, x):
    x = self.out_1(x)
    x = self.out_2(x)
    x = self.out_3(x)
    return x

When debugging, nvidia-smi shows that each line consumes x MB of GPU memory, and this memory is not freed after the forward method returns, so training quickly crashes. I am confident that out_1, out_2, out_3, … themselves do not leak memory, so my questions are:

1. Is this caused by PyTorch automatically keeping the computation graph?
2. If so, is there a way to free this memory without waiting for backward()?
3. If not, what are the possible causes of this problem?

Hi,

  • Yes, a lot of memory is kept alive until the backward call because the intermediate activations are needed to compute the gradients.
  • You can use the checkpointing tool to trade compute for memory: it will slow down the backward pass but reduce memory usage. See the sketch below.
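To make the second point concrete, here is a minimal sketch using torch.utils.checkpoint.checkpoint. The nn.Linear sizes and the module layout are placeholders standing in for your out_1 / out_2 / out_3 blocks; only the checkpoint calls illustrate the technique.

import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # Placeholder sub-modules standing in for out_1 / out_2 / out_3
        self.out_1 = nn.Linear(1024, 1024)
        self.out_2 = nn.Linear(1024, 1024)
        self.out_3 = nn.Linear(1024, 1024)

    def forward(self, x):
        # checkpoint() discards the activations inside each wrapped block
        # during forward and recomputes them during backward,
        # trading extra compute for lower peak GPU memory.
        x = checkpoint(self.out_1, x, use_reentrant=False)
        x = checkpoint(self.out_2, x, use_reentrant=False)
        x = self.out_3(x)  # last block left un-checkpointed
        return x

Note that only the activations inside the checkpointed segments are released early; the rest of the graph is still built, and everything is freed as usual once backward() has run. If the blocks were stacked in an nn.Sequential, torch.utils.checkpoint.checkpoint_sequential would be an alternative way to split the model into checkpointed segments.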