Dear all,

I ran into this issue today and I am not sure whether it is expected behavior. I am trying to code a linear regression with PyTorch.

```python
for epoch in range(n_epochs):
    # forward pass: compute prediction and loss, then backprop
    yhat = a * x + b
    loss = ((yhat - y) ** 2).mean()
    loss.backward()
    with torch.no_grad():
        # option 1 (runtime error on the next loss.backward()):
        # a = a - lr * a.grad
        # b = b - lr * b.grad
        # option 2 (works):
        a -= lr * a.grad
        b -= lr * b.grad
    a.grad.zero_()
    b.grad.zero_()
```
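To narrow it down, here is a minimal sketch I tried (toy tensors and illustrative values, not my actual model) comparing `requires_grad` after each style of update under `torch.no_grad()`:

```python
import torch

a = torch.ones(1, requires_grad=True)
b = torch.ones(1, requires_grad=True)

with torch.no_grad():
    a = a - 0.1  # option 1 style: rebinds the name `a` to a new tensor
    b -= 0.1     # option 2 style: mutates the original tensor in place

print(a.requires_grad)  # False
print(b.requires_grad)  # True
```

So the rebound `a` no longer requires grad, while the in-place-updated `b` still does.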

Both options look equivalent to me, but the first one raises a runtime error on the next call to `loss.backward()`:

`RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn`

Can someone point me to the reason why PyTorch treats the in-place subtraction (`a -= ...`) differently from the expanded assignment (`a = a - ...`)?
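In case it helps, here is a self-contained version of my loop that runs without the error (synthetic 1-D data and plain SGD; the names and hyperparameters are just for illustration):

```python
import torch

torch.manual_seed(0)

# synthetic data for y = 2x + 1 with a little noise (illustrative values)
x = torch.rand(100, 1)
y = 2.0 * x + 1.0 + 0.1 * torch.randn(100, 1)

a = torch.randn(1, requires_grad=True)
b = torch.randn(1, requires_grad=True)
lr = 0.1

for epoch in range(1000):
    yhat = a * x + b
    loss = ((yhat - y) ** 2).mean()
    loss.backward()
    with torch.no_grad():
        # in-place updates keep a and b as leaf tensors that require grad
        a -= lr * a.grad
        b -= lr * b.grad
        # writing `a = a - lr * a.grad` here instead rebinds `a` to a fresh
        # tensor created under no_grad (requires_grad=False), and the next
        # loss.backward() then raises the RuntimeError above
    a.grad.zero_()
    b.grad.zero_()

print(a.item(), b.item())  # should approach 2 and 1
```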