Why are "x+=x" and "x=x+x" are handled differently?

I found that when “requires_grad=True”, x+=x does not work for backprop in loss.backward().
I noticed this because when I did x+=x in forward() of MyModel, autograd raised an error about an in-place operation on a variable produced by AddBackward().


But when I use x = x + x instead, it works fine.

I know x+=1 and x = x + 1 are not the same, but can’t they be handled the same way in Torch? Why the error?
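A minimal example of the kind of error I mean (not my actual model; here tanh just forces autograd to save an intermediate value, so the in-place add breaks backward()):

```python
import torch

w = torch.ones(3, requires_grad=True)

# Out-of-place version: x = x + x creates a new tensor, so backprop works.
x = torch.tanh(w)
x = x + x
x.sum().backward()
print(w.grad)  # gradients arrive at w as expected

# In-place version: x += x overwrites the tanh output that autograd saved
# for the backward pass, so backward() raises a RuntimeError.
w.grad = None
x = torch.tanh(w)
try:
    x += x
    x.sum().backward()
except RuntimeError as e:
    print(e)  # "... modified by an inplace operation ..."
```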

Hi Bibhabasu!

This is (I presume) by design. According to standard python syntax, when
you execute x = x + 1, python evaluates the right-hand side, and, in the
case of pytorch, causes a new tensor to be created. Then the python “variable”
x is set to refer to that new tensor.

On the other hand, the pytorch designers chose (logically, in my mind) to
implement x += 1 as in-place tensor addition. So this version does not
create a new tensor, but, instead, modifies the existing x tensor in place
(hence the error you get).
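You can see the difference directly by checking whether the underlying storage changes (data_ptr() is just used here as an illustration):

```python
import torch

x = torch.zeros(3)
ptr = x.data_ptr()
x = x + 1                    # out-of-place: a brand-new tensor is created
print(x.data_ptr() == ptr)   # False -- the name x now refers to the new tensor

x = torch.zeros(3)
ptr = x.data_ptr()
x += 1                       # in-place: the existing tensor is modified
print(x.data_ptr() == ptr)   # True -- same storage, same tensor
```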

Why do things this way? First, I think having x += 1 be implemented as
in-place addition is what I would “expect,” based on normal python syntax.
Also, this way you have a convenient choice between the in-place and
out-of-place operation.
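For example (just a sketch, nothing specific to your model), you can explicitly pick whichever behavior you need:

```python
import torch

x = torch.randn(3, requires_grad=True)

y = x.add(1.0)   # out-of-place: safe inside the autograd graph
# x.add_(1.0)    # in-place: would raise, because x is a leaf that requires grad

# In-place updates are still convenient when no graph is being recorded,
# e.g. an SGD-style parameter update:
with torch.no_grad():
    x -= 0.1 * torch.ones_like(x)
```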

Best.

K. Frank

Please see here:

https://pytorch.org/tutorials/beginner/introyt/autogradyt_tutorial.html#autograd-and-in-place-operations
