"Buffers have already been freed" error

Hi all,
I'm unable to figure out why I get a "buffers have already been freed" error with the code below.
Using retain_graph=True is expensive, so I can't use that option.
Please suggest where I should retain the previous loss value so I can use it in a running calculation in the next iterations.

for idx, (imgs, labels) in enumerate(tk0):
    imgs_train, labels_train = imgs.cuda(), labels.float().cuda()
    output_train = model(imgs_train)
    loss = criterion(output_train, labels_train)
    if idx >= 1:
        with torch.no_grad():
            prev_loss = loss
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()

Since you are setting prev_loss = loss, you will backpropagate through the same computation graph multiple times, so you would need to pass retain_graph=True to the backward call.
Also, assigning prev_loss = loss inside the no_grad block will not detach prev_loss from the computation graph; no_grad only ensures that newly created operations won't require gradients, it does not strip the gradient history from an existing tensor.
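If you only need the previous loss *value* (not its gradient history), calling .detach() breaks the reference to the old graph, so backward() no longer needs retain_graph=True. A minimal sketch of that idea, with a hypothetical model, data, and optimizer, and plain backward() in place of amp.scale_loss:

```python
import torch

torch.manual_seed(0)
model = torch.nn.Linear(4, 1)          # stand-in for the real model
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

prev_loss = None
for idx in range(3):
    imgs, labels = torch.randn(8, 4), torch.randn(8, 1)  # stand-in batch
    loss = criterion(model(imgs), labels)
    if idx >= 1:
        loss = loss + prev_loss        # running calculation with the previous value
    prev_loss = loss.detach()          # cut the link to the old graph here
    optimizer.zero_grad()
    loss.backward()                    # works without retain_graph=True
    optimizer.step()

# no_grad does NOT detach an existing tensor: an alias created inside the
# block still carries the gradient history, while the detached copy does not.
with torch.no_grad():
    alias = loss
assert alias.requires_grad and not prev_loss.requires_grad
```

Because prev_loss is detached before the next iteration adds it to the new loss, each backward() call only walks the graph built in the current iteration, which is why retain_graph=True is no longer needed.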