I found that when a tensor has “requires_grad=True”, then x += x breaks backprop in loss.backward().
I noticed this because when I did x += x in MyModel’s forward(), autograd raised an error about an in-place operation on an AddBackward variable.
This is (I presume) by design. According to standard python syntax, when
you execute x = x + 1, python evaluates the right-hand side, which, in the
case of pytorch, creates a new tensor. Then the python “variable” x is set to refer to that new tensor.
On the other hand, the pytorch designers chose (logically, in my mind) to
implement x += 1 as in-place tensor addition. So this version does not
create a new tensor, but, instead, modifies the existing x tensor in place
(hence the error you get).
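Here is a small sketch of the difference (the sigmoid is just there as an example of an op whose saved output autograd needs for the backward pass):

```python
import torch

x = torch.ones(3, requires_grad=True)

# Out-of-place: the right-hand side builds a brand-new tensor and the
# name y is rebound to it, so nothing autograd saved gets touched.
y = torch.sigmoid(x)
y = y + y
y.sum().backward()       # fine

# In-place: += modifies the existing tensor. sigmoid() saves its output
# for the backward pass, so clobbering that output breaks backward.
x2 = torch.ones(3, requires_grad=True)
z = torch.sigmoid(x2)
z += z                   # in-place add on a tensor autograd still needs
z.sum().backward()       # RuntimeError: ... modified by an inplace operation
```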
Why do things this way? First, I think having x += 1 be implemented as
in-place addition is what I would “expect,” based on normal python syntax.
Also, this way you have a convenient choice between the in-place and
out-of-place operation.
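In the forward() from the question, then, the out-of-place spelling is the one to use (a toy stand-in for MyModel, since the original isn’t shown):

```python
import torch

class MyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, inp):
        x = torch.sigmoid(self.linear(inp))
        # x += x        # in-place: can break backward if autograd saved x
        x = x + x       # out-of-place: new tensor, backward stays happy
        return x

model = MyModel()
loss = model(torch.randn(2, 4)).sum()
loss.backward()         # works
```

And where an in-place update really is what you want (say, updating parameters by hand under torch.no_grad()), += is still available, which is the convenient choice mentioned above.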