Using loss from previous epoch to train current epoch

I’m guessing using

optimizer.zero_grad()

criterion = nn.CrossEntropyLoss()
current_error = criterion(y_pred, y_true)
total_error = current_error + previous_error

total_error.backward()
optimizer.step()

doesn’t work, since the graph behind previous_error has already been freed by the previous epoch’s backward() call.
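For what it’s worth, here is a minimal runnable sketch of what I expect to happen (the toy Linear model and random data are just placeholders of mine): the first backward() frees previous_error’s graph, so reusing it raises a RuntimeError, and detaching it only adds a constant that contributes nothing to the gradients.

```python
import torch
import torch.nn as nn

# Hypothetical toy model and data, just to make the sketch runnable.
torch.manual_seed(0)
model = nn.Linear(4, 3)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
x = torch.randn(8, 4)
y = torch.randint(0, 3, (8,))

# "Epoch 1": a normal step. backward() frees the graph behind previous_error.
optimizer.zero_grad()
previous_error = criterion(model(x), y)
previous_error.backward()
optimizer.step()

# "Epoch 2": adding the stale loss and calling backward() fails, because
# previous_error's graph was already freed during epoch 1.
optimizer.zero_grad()
current_error = criterion(model(x), y)
raised = False
try:
    (current_error + previous_error).backward()
except RuntimeError:
    raised = True

# Detaching previous_error avoids the error, but it is then a constant,
# so it adds nothing to the gradients: this is just an ordinary step on
# current_error with a shifted loss value.
optimizer.zero_grad()
current_error = criterion(model(x), y)  # recompute; the failed backward above may have consumed parts of the graph
(current_error + previous_error.detach()).backward()
optimizer.step()
print("stale-loss backward raised RuntimeError:", raised)
```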

Have you tried this to see if it works?