In-place operation error from tutorial

According to the autograd tutorial code in PyTorch's "Learning PyTorch with Examples":

with torch.no_grad():
    w1 -= learning_rate * w1.grad
    w2 -= learning_rate * w2.grad

    # Manually zero the gradients after updating weights
    w1.grad.zero_()
    w2.grad.zero_()

Running it produces the following error in the console:

<ipython-input-67-ca9abcaafd03> in <module>()
      1 # update the weight
----> 2 w1 -= lr * w1.grad
      3 w2 -= lr * w2.grad

RuntimeError: a leaf Variable that requires grad has been used in an in-place operation.
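
For context, here is a minimal sketch (not from the tutorial) that reproduces the error: autograd refuses an in-place update to a leaf tensor with requires_grad=True while it is recording operations, but allows the same update under torch.no_grad():

import torch

w = torch.randn(3, requires_grad=True)   # leaf tensor tracked by autograd
loss = (w ** 2).sum()
loss.backward()                          # populates w.grad

learning_rate = 0.01

try:
    w -= learning_rate * w.grad          # in-place op on a tracked leaf
except RuntimeError as e:
    print(e)  # a leaf Variable that requires grad has been used in an in-place operation.

with torch.no_grad():
    w -= learning_rate * w.grad          # not recorded by autograd, so allowed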

Should it be changed to the latest version, using an in-place tensor operation such as:

learning_rate = 0.01
for f in net.parameters():
    f.data.sub_(f.grad.data * learning_rate)

I’d wrap the critical bits in torch.no_grad():.
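
For example, a sketch of that parameter loop with the update wrapped in torch.no_grad(), which also avoids reaching into .data (the nn.Linear module and shapes here are stand-ins, not the tutorial's network):

import torch
import torch.nn as nn

net = nn.Linear(4, 2)                    # hypothetical stand-in network
out = net(torch.randn(8, 4)).sum()
out.backward()                           # populate .grad on the parameters

learning_rate = 0.01
with torch.no_grad():
    for f in net.parameters():
        # autograd is paused here, so the in-place leaf update is allowed
        f -= learning_rate * f.grad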

Best regards

Thomas

To be clear:
the following code

with torch.no_grad():
    w1 -= learning_rate * w1.grad
    w2 -= learning_rate * w2.grad

    # Manually zero the gradients after updating weights
    w1.grad.zero_()
    w2.grad.zero_()

does run properly under

with torch.no_grad():

It is only this piece that raises the error when run outside the with block:

w1 -= learning_rate * w1.grad
w2 -= learning_rate * w2.grad

It is just a bit confusing, when trying out the autograd features, to have to perform the updates in this requires_grad=False manner.
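
For reference, a self-contained sketch of the whole pattern that runs without the error (the shapes mirror the tutorial's w1/w2 setup but are otherwise arbitrary):

import torch

learning_rate = 1e-6

# hypothetical data and weights, analogous to the tutorial's example
x = torch.randn(64, 1000)
y = torch.randn(64, 10)
w1 = torch.randn(1000, 100, requires_grad=True)
w2 = torch.randn(100, 10, requires_grad=True)

for step in range(2):
    y_pred = x.mm(w1).clamp(min=0).mm(w2)
    loss = (y_pred - y).pow(2).sum()
    loss.backward()

    with torch.no_grad():
        w1 -= learning_rate * w1.grad
        w2 -= learning_rate * w2.grad
        # zero the gradients so the next backward() starts fresh
        w1.grad.zero_()
        w2.grad.zero_()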

Thanks, Tom