Memory leak in for loop

I have a forward method like the following (simplified):

def forward(self, Y, D):
    # Y_features is assumed to be computed from Y elsewhere (omitted from this simplified snippet)
    L0_1 = self.relu(self.low_level(D))
    cas_input = self.relu(self.conv_genesis(L0_1))
    for i in range(self.N + 1):
        L_1 = self.relu(self.D_convs[i](cas_input))    # leak1
        L_2 = torch.cat([L_1, Y_features[i]], 1)       # leak2
        cas_input = self.relu(self.D_fusions[i](L_2))  # leak3
    output = self.reconstruct(cas_input) + D
    return output

I checked with nvidia-smi and found that GPU memory usage grows on every iteration of the for loop, which looks like a memory leak.
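For reference, a sketch of how the per-iteration growth could also be tracked from inside the loop (assuming the model runs on a CUDA device; torch.cuda.memory_allocated() only reports tensors PyTorch currently holds, while nvidia-smi also counts the caching allocator):

    for i in range(self.N + 1):
        L_1 = self.relu(self.D_convs[i](cas_input))
        L_2 = torch.cat([L_1, Y_features[i]], 1)
        cas_input = self.relu(self.D_fusions[i](L_2))
        # print allocator usage after each cascade stage
        print(f"iter {i}: {torch.cuda.memory_allocated() / 1024**2:.1f} MiB allocated")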

Are you sure it’s a memory leak?
It seems you are passing cas_input in a "recurrent" way into your modules, so its computation graph is stored for every iteration. If you don't want to calculate gradients for cas_input, you could call .detach() on it before passing it to self.D_convs[i].
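If the gradients through the cascade are not needed, a minimal sketch of the loop with the detach applied could look like this (otherwise the growth during a single forward pass is expected, since the intermediate activations have to be kept for the backward pass):

    for i in range(self.N + 1):
        cas_input = cas_input.detach()                 # cut the graph built in previous iterations
        L_1 = self.relu(self.D_convs[i](cas_input))
        L_2 = torch.cat([L_1, Y_features[i]], 1)
        cas_input = self.relu(self.D_fusions[i](L_2))

Note that this also stops gradients from reaching self.low_level, self.conv_genesis, and the earlier cascade stages, so only use it if that is what you intend.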