I am having a big problem with the computation graph being freed.

I wrote a seq2seq network. It has two classes: one is the encoder, the other is the decoder.
The loss backpropagates fine two times, but the third time, calling backward on the loss raises an error.

This is the error:
RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.

This is my code:
for epoch in range(20):
    for k in range(6250):

        if (k+1) % 5 == 0:
            en_input = to_cuda(total_in[:, k-4:k+1, :])
            target_out = to_cuda(total_out[:, k-4:k+1, :])

            #==========encoder============================
            output, hidden_out = encoder(en_input)   #### this is my encoder
            de_input = output[:, -1, :]

            #==========decoder============================
            for i in range(5):

                de_input = to_cuda(de_input.view(3, 1, 25))

                output, hidden_out = decoder(de_input)   ##### this is my decoder

                out = torch.cat((out, output), 1)

            out = out[:, 1:6, :]

            #==========loss backward======================
            loss = criterion(out, target_out)
            optimizer_en.zero_grad()
            optimizer_de.zero_grad()
            print(loss)    ## first time: loss 0.4317, second time: loss 0.4834
            loss.backward()
            optimizer_en.step()
            optimizer_de.step()

What is out in your code before this? It seems that it is not reset between iterations.

There is no out before that. What do you mean by reset between iterations?

The Python variable called out: you keep concatenating new elements onto it, but you never reset it to be empty.
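
Something like this would avoid it (just a sketch, reusing the names from your snippet and keeping de_input exactly as you wrote it):

outputs = []                                 # fresh container on every training step
for i in range(5):
    de_input = to_cuda(de_input.view(3, 1, 25))
    step_out, hidden_out = decoder(de_input)
    outputs.append(step_out)
out = torch.cat(outputs, dim=1)              # built only from this step's tensors

That way out never contains tensors whose graph was already freed by a previous backward, and the out[:, 1:6, :] slicing is no longer needed.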

If I write this code, that error occurs:
optimizer.zero_grad()
loss.backward()
loss.backward()
optimizer.step()
I think it would use the previous loss for backpropagation, so that error would happen. But the code I wrote is this:
loss=criterion(out,target_out)
optimizer_en.zero_grad()
optimizer_de.zero_grad()
print('loss %.4f' % (loss))
loss.backward()
optimizer_en.step()
optimizer_de.step()

This error will occur if any part of your computational graph is shared with the previous iteration.
For example, here, if out is not reset between iterations, you will backpropagate twice through that part of the graph.
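
You can see the same mechanism in a tiny standalone example (just a sketch, nothing to do with your model):

import torch

w = torch.randn(1, requires_grad=True)
out = torch.zeros(1)
for step in range(3):
    out = torch.cat((out, w * w))   # out keeps the old graph pieces inside it
    loss = out.sum()
    loss.backward()                 # works on step 0, fails afterwards: the part of
                                    # the graph that built the old out was freed

Resetting out (or detaching it) before each step makes every backward go through a fresh graph only.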

Oh, I see what you mean, thank you for your help. But how do I solve this problem?

Don’t do that :slight_smile:
You should not share part of the graph like that in general.
If, in your particular case, you actually want to do it, use the retain_graph=True flag when you call backward.
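
With your snippet that would look roughly like this:

optimizer_en.zero_grad()
optimizer_de.zero_grad()
loss.backward(retain_graph=True)   # keep the saved buffers so the shared part of
                                   # the graph can be backpropagated through again
optimizer_en.step()
optimizer_de.step()

But keep in mind that the shared part of the graph is then kept around and walked again on every backward.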

Thank you, but I found that if I use the retain_graph=True flag, the computation time of every step keeps growing as training goes on.

Oh, thank you for your help. I found the bug in my code. The bug was in the for loop: for i in range(5):