The following code is from the PyTorch autograd tutorial example:
```python
with torch.no_grad():
    w1 -= learning_rate * w1.grad
    w2 -= learning_rate * w2.grad

    # Manually zero the gradients after updating weights
    w1.grad.zero_()
    w2.grad.zero_()
```
Running it produces the following error in the console:
```
<ipython-input-67-ca9abcaafd03> in <module>()
      1 # update the weight
----> 2 w1 -= lr * w1.grad
      3 w2 -= lr * w2.grad

RuntimeError: a leaf Variable that requires grad has been used in an in-place operation.
```
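For context, this error is straightforward to reproduce with a single dummy leaf tensor (the tensor and values here are illustrative, not the tutorial's actual weights): an in-place update on a leaf tensor that requires grad fails unless autograd tracking is suspended.

```python
import torch

# A leaf tensor that autograd tracks
w1 = torch.randn(2, 2, requires_grad=True)
(w1 ** 2).sum().backward()

lr = 0.01

# In-place update outside torch.no_grad() raises the same RuntimeError
try:
    w1 -= lr * w1.grad
except RuntimeError as e:
    print(e)
```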
Should it be changed to the following version, which performs the in-place tensor operations on `.data` instead?
```python
learning_rate = 0.01
for f in net.parameters():
    f.data.sub_(f.grad.data * learning_rate)
```
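For comparison, here is a minimal sketch (using a single dummy tensor rather than the tutorial's `net`) showing that the tutorial's style does work, provided the update actually runs inside the `torch.no_grad()` block, so the in-place subtraction is not recorded by autograd:

```python
import torch

# Dummy leaf weight standing in for w1/w2 from the tutorial
w = torch.randn(3, requires_grad=True)
(w ** 2).sum().backward()

learning_rate = 0.01

# Inside torch.no_grad(), the in-place update is allowed and
# does not raise the "leaf Variable ... in-place" RuntimeError
with torch.no_grad():
    w -= learning_rate * w.grad
    w.grad.zero_()

print(w.requires_grad)  # w is still a leaf that requires grad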