RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling backward the first time

I get this error when training on a CPU machine, but not on a GPU machine (e.g. Google Colab), even though I am computing two losses in both cases.

if phase == 'train':
    loss1.backward()
    loss2.backward()
    optimizer.step()

Which PyTorch version are you using? If not the latest one, could you update and rerun the code?
If one backend detects this error, it should be raised by all backends, so it might be an internal bug.


My PyTorch version is 1.7.0+cpu. I am computing two losses and get the error message during training. However, once I pass retain_graph=True to loss1.backward(), the training completes successfully.
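For context, a minimal sketch of the pattern being discussed (the tensors and losses here are hypothetical stand-ins, not the original model): when two losses share intermediate results in one graph, the first backward call frees those intermediates unless retain_graph=True is passed, and the second backward then raises the RuntimeError above.

```python
import torch

# Hypothetical setup: two losses sharing one computation graph.
x = torch.tensor([1.0, 2.0], requires_grad=True)
y = x * 3                 # shared intermediate result
loss1 = y.sum()
loss2 = (y ** 2).sum()

# retain_graph=True keeps the saved intermediates alive so the
# second backward pass can reuse them.
loss1.backward(retain_graph=True)
loss2.backward()          # would raise the RuntimeError without retain_graph above

print(x.grad)             # gradients from both losses accumulate
```

An alternative that avoids retain_graph entirely is to sum the losses and call backward once, e.g. (loss1 + loss2).backward(), which traverses the graph a single time.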

Since this error is only raised on the CPU, while the GPU run seems to be working, I would still consider it a potential issue.
Could you post an executable code snippet to reproduce this issue, please?