In the first case, the operation is performed in-place, so the Python object stays the same, while in the second case you create a new object.
To give an example:
import torch

a = torch.rand(3)
b = a                # b is bound to the same Python object as a
print(id(a), id(b))  # same id twice
b = b - 1            # out-of-place: builds a new tensor and rebinds b
print(id(b))         # different id: the reference changed
a -= 1               # in-place: modifies the existing tensor
print(id(a))         # same id as before: still the same object
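You can also see the same distinction below the level of Python object identity by comparing storage addresses. A quick follow-up sketch (data_ptr() is a standard Tensor method; the values are just for illustration):

import torch

a = torch.rand(3)
ptr = a.data_ptr()           # address of the underlying storage
a -= 1
print(a.data_ptr() == ptr)   # True: the in-place op reuses the same memory
a = a - 1
print(a.data_ptr() == ptr)   # False: the out-of-place op allocated new memory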
Now, this is just me being curious: is the fact that x = x + x rebinds the name to a new object while x += x operates in place a feature of Python or a feature of PyTorch? Like, could x = x + x have been made equivalent to x += x if the developers of PyTorch wanted? Just curious.
The fact that it works fine is a feature (as mentioned by @apaszke in Slack), but there are reasons why it wouldn't necessarily be the case. W is a Variable that holds a tensor in W.data. Now, what happens if you change the tensor that W originally points to by doing W.data = new_tensor? W now points to new_tensor, but W is a Variable that was created to represent the original tensor.
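To make the dispatch side of the question concrete: Python itself leaves this choice to the class. x += x calls __iadd__ when it is defined (falling back to __add__ otherwise), while x = x + x always calls __add__ and rebinds the name, and PyTorch simply chooses to implement __iadd__ in place. A minimal pure-Python sketch (the Box class is hypothetical, just to show the mechanism):

class Box:
    """Toy container implementing both flavors of addition."""
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        # Out-of-place: build and return a brand-new object
        return Box(self.value + other.value)

    def __iadd__(self, other):
        # In-place: mutate self and return it
        self.value += other.value
        return self

x = Box(1)
before = id(x)
x += x                  # dispatches to __iadd__
print(id(x) == before)  # True: same object, mutated in place
x = x + x               # dispatches to __add__
print(id(x) == before)  # False: a new object was bound to x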