I am running into the error "One of the variables needed for gradient computation has been modified by an inplace operation" when backpropagating through my loss function. My loss function is simple:
    def loss_fn(x):
        total_loss = []
        for i in range(k):
            loss = ...
            total_loss.append(loss)
        final_loss = torch.sum(torch.stack(total_loss))
        return final_loss
I can call loss.backward() at any given iteration of the loop, but calling final_loss.backward() after the loop fails, which suggests the for loop is what breaks it. I do not see an inplace operation anywhere, and I have tried the solutions from other questions without success.
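For context, here is a minimal, hypothetical sketch (not my actual code; `w` is an invented buffer) of how I understand this symptom can arise: if a tensor that an earlier iteration's loss depends on is later modified inplace, then loss.backward() inside the loop succeeds (the tensor is still unmodified at that point), while final_loss.backward() after the loop fails, because autograd's saved copy of that tensor has had its version counter bumped:

```python
import torch

k = 3
x = torch.ones(3, requires_grad=True)
w = torch.ones(3)            # hypothetical buffer, updated inplace each step

total_loss = []
for i in range(k):
    loss = (x * w).sum()     # autograd saves w here to compute dloss/dx
    total_loss.append(loss)
    w += 1.0                 # inplace update bumps w's version counter

final_loss = torch.sum(torch.stack(total_loss))
try:
    final_loss.backward()    # fails: w saved in iteration 0 was modified
except RuntimeError as e:
    err = str(e)
    print(err)
```

In this sketch, replacing `w += 1.0` with an out-of-place `w = w + 1.0` makes the error go away, since each iteration's saved tensor is then left untouched.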