"Buffers have already been freed" error

hi all,
I'm unable to figure out why I get a "buffers have already been freed" error with the code below.
Using retain_graph=True is expensive, so I can't use that option.
Please suggest how to retain the previous loss value and use it in a running calculation in the next iterations.

for idx, (imgs, labels) in enumerate(tk0):
    imgs_train, labels_train = imgs.cuda(), labels.float().cuda()
    output_train = model(imgs_train)
    loss = criterion(output_train, labels_train)
    if idx >= 1:
        loss = loss * 0.3 + 0.7 * prev_loss
    with torch.no_grad():
        prev_loss = loss
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()
    optimizer.step()
    optimizer.zero_grad()

Since you are setting prev_loss = loss and feeding it into the next iteration's loss, backward() will try to backpropagate through the previous iteration's computation graph, whose intermediate buffers have already been freed; that is why the error suggests setting retain_graph=True in the backward call.
Assigning prev_loss = loss inside the torch.no_grad() block will not detach prev_loss from the computation graph: no_grad() only makes newly created operations not require gradients, and a plain Python assignment is not an operation, so prev_loss still references the same graph-attached tensor.
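
A minimal sketch of the fix, reusing the model, criterion, optimizer, and apex amp setup from your snippet: store the detached value with loss.detach(), so the running blend carries only the number, not the graph, and each backward() only traverses the current iteration's graph:

for idx, (imgs, labels) in enumerate(tk0):
    imgs_train, labels_train = imgs.cuda(), labels.float().cuda()
    output_train = model(imgs_train)
    loss = criterion(output_train, labels_train)
    if idx >= 1:
        # prev_loss is detached, so gradients flow only through the
        # current loss (scaled by 0.3); the old graph is not revisited.
        loss = loss * 0.3 + 0.7 * prev_loss
    # detach() returns a tensor with the same value but cut off from
    # the graph, so the next iteration's backward() cannot try to
    # traverse this iteration's (already freed) graph.
    prev_loss = loss.detach()
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()
    optimizer.step()
    optimizer.zero_grad()

Note that with prev_loss detached, the gradient signal comes only from the current batch's loss scaled by 0.3. If the smoothing was only meant for reporting, an alternative is to call backward() on the raw loss and keep the exponential blend in a separate, detached running variable used purely for logging.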