PyTorch 1.4 works, but PyTorch 1.5.x gives 'RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation'

Hi there,

I've run into a weird issue. With PyTorch 1.4 my code runs just fine,
but with 1.5.1 I get 'RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation'.

I am using CrossEntropyLoss.

Cheers,
Tassilo

This error might be raised due to the fixed in-place checks in the optimizers.
From the 1.5.0 release notes:

torch.optim optimizers changed to fix in-place checks for the changes made by the optimizer (#33640, #34211)

If this causes your code to fail, there are two possible reasons:
Reason 1: The value of that parameter was actually saved and used and we were computing incorrect gradients in previous versions of PyTorch. This would result in an error message mentioning incorrect version numbers. You should replace code that uses self.my_param with self.my_param.clone() to make sure the saved version is different from the one that is modified by the optimizer.
Reason 2: You know what you’re doing and change the values back to the right thing before the next backward. However, you’re running into an error because the version counter cannot be decremented. Open an issue with your particular use case and we will help you to work around the version counter issue.
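To make Reason 1 concrete, here is a minimal sketch of the failing pattern and the clone fix. Everything in it (the Scale module, the shapes, the double backward) is an assumption for illustration, not your actual code; adapt the idea to wherever your code reads a parameter whose saved value must survive optimizer.step():

```python
import torch
import torch.nn as nn

# Hypothetical module, invented for illustration: multiplying the input by
# self.weight makes autograd save that exact parameter tensor, since it is
# needed to compute the gradient w.r.t. x.
class Scale(nn.Module):
    def __init__(self):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(1))

    def forward(self, x):
        return x * self.weight            # raises in the second backward below
        # return x * self.weight.clone()  # Reason 1 fix: autograd saves the copy

model = Scale()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(4, 1, requires_grad=True)  # x needs grad, so weight gets saved
loss = model(x).sum()

loss.backward(retain_graph=True)
optimizer.step()  # in-place parameter update bumps weight's version counter

# The second backward needs the saved weight, but its version counter no
# longer matches: RuntimeError on 1.5+ (and silently incorrect gradients
# in previous versions).
loss.backward()
```

With the clone() variant, autograd saves the copy instead of the parameter itself, so the in-place update from optimizer.step() no longer invalidates the saved tensor, and gradients still reach self.weight through the clone.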

Could you take a look at the reasons and the example above and check if one of them fits your use case?