Hello, I got an error while writing a simple linear regression model using autograd. My model is y = w1*x1 + w2*x2 + b and I need to learn the parameters w1, w2, and b. My code is below:
When you do weight = weight - weight.grad*lr, weight now points to a brand-new Tensor, so the gradient information from the original weight Tensor is gone. You can check that after this line, weight.grad is None.
The other problem you're going to encounter is that weight = weight - XXX will itself be tracked by autograd, which you most likely don't want.
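A minimal sketch of what happens (the names and the learning rate here are just illustrative):

```python
import torch

lr = 0.1  # illustrative learning rate
weight = torch.ones(2, requires_grad=True)

torch.sum(weight).backward()
print(weight.grad)  # tensor([1., 1.])

# Rebinding the name creates a brand-new (non-leaf) Tensor;
# the original Tensor's .grad is not carried over:
weight = weight - weight.grad * lr
print(weight.grad)  # None

# Worse, the subtraction itself was recorded by autograd:
print(weight.requires_grad)  # True
```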
To fix these, you can:
Change weight in place, to avoid the first problem above.
Disable autograd while you update your weights, to avoid the second one.
Here is the updated code:
for i in range(epochs):
    predict = torch.mm(feature, weight) + bias.item()
    loss = torch.sum(predict - label, dim=0)
    loss.backward()
    # Disable the autograd
    with torch.no_grad():
        # In-place changes
        weight.sub_(weight.grad * lr)
        bias.sub_(bias.grad * lr)  # A .grad is missing in your code here I think ;)
        # Do the reset in no_grad mode as well, in case you do second-order
        # derivatives later (meaning that weight.grad will itself require grad)
        weight.grad.zero_()
        bias.grad.zero_()
Thank you! But I still get the same error following your code:
Traceback (most recent call last):
File "/Users/audrey/PycharmProjects/PyTorchLearning/main.py", line 31, in <module>
bias.grad.data.zero_()
AttributeError: 'NoneType' object has no attribute 'data'
Then I changed bias.item() into bias:
predict = torch.mm(feature, weight) + bias
No more errors! I wonder why bias.item() doesn't work.
Anyway, my code can run successfully, thanks!
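On the .item() question: .item() returns a plain Python number, not a Tensor, so bias never enters the autograd graph. backward() then never populates bias.grad, which stays None and causes the AttributeError on bias.grad.zero_(). A small sketch (variable names here are illustrative):

```python
import torch

w = torch.ones(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
x = torch.tensor([2.0])

# .item() turns b into a constant Python float, so autograd never sees b:
y = x * w + b.item()
y.sum().backward()
print(w.grad)  # tensor([2.])
print(b.grad)  # None -> hence the AttributeError on b.grad.zero_()

# Using the Tensor itself keeps b in the graph:
y = x * w + b
y.sum().backward()
print(b.grad)  # tensor([1.])
```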