Error running multiple models in Torch 1.7.1 but works in Torch 1.0

Yes, in-place parameter updates now raise an error if you are using stale gradients, as described in the 1.5 release notes (in the section "torch.optim optimizers changed to fix in-place checks for the changes made by the optimizer").

The reason is that the gradient computation would otherwise be incorrect. In your example you compute loss1 and loss2 using the model parameters in their initial state s0. loss1.backward() computes the gradients and opt1.step() updates the parameters to state s1.
loss2, however, was computed with the model still in state s0, so loss2.backward() would calculate the gradients of loss2 w.r.t. the s0 parameters, while the model has already been updated to s1. These gradients would therefore be wrong, and the error is raised.
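As a minimal sketch of this pattern (the model, optimizer, and losses here are hypothetical stand-ins, not your actual code), something like the following should reproduce the error in >=1.5 while running silently in 1.0:

```python
import torch
import torch.nn as nn

# Hypothetical toy setup: two stacked linear layers, so the second
# layer's weight is saved for the backward pass through the first layer.
model = nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 1))
opt1 = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(8, 4)
loss1 = model(x).mean()         # forward pass with parameters in state s0
loss2 = model(x).pow(2).mean()  # also computed with parameters in state s0

loss1.backward()
opt1.step()       # in-place update of the parameters to state s1

# In >=1.5 this raises a RuntimeError complaining that a variable needed
# for gradient computation has been modified by an inplace operation,
# because loss2's graph still references the (now updated) s0 parameters.
loss2.backward()
```

If you do want both gradients w.r.t. s0, one fix is to call both backward passes before stepping, letting the gradients accumulate in .grad:

```python
loss1.backward()
loss2.backward()  # both gradients are now valid w.r.t. state s0
opt1.step()
```

Alternatively, recompute loss2 from a fresh forward pass after opt1.step() if you want its gradients w.r.t. s1.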
