'NoneType' object has no attribute 'zero_'

Hello, I got an error when writing a simple linear regression model using autograd. My model is
y = w1*x1 + w2*x2 + b, and I need to learn the parameters w1, w2, and b. My code is below:

import numpy as np
import torch

x1 = [0.0, 2.0]
y1 = 1.0
x2 = [1.0, 3.0]
y2 = 1.0
x3 = [1.0, 0.0]
y3 = 0.0
x4 = [2.0, 1.0]
y4 = 0.0
X = np.array([x1, x2, x3, x4])
Y = np.array([y1, y2, y3, y4])

feature = torch.from_numpy(X)
label = torch.from_numpy(Y).view(-1, 1)

epochs = 100
lr = 0.01
weight = torch.randn((2, 1), requires_grad=True, dtype=torch.float64)
bias = torch.randn(1, requires_grad=True, dtype=torch.float64)

for i in range(epochs):
    predict = torch.mm(feature, weight) + bias.item()
    loss = torch.sum(predict - label, dim=0)
    loss.backward()
    weight = weight - weight.grad*lr
    bias = bias - bias*lr
    weight.grad.zero_()
    bias.grad.zero_()

The error occurs at the line weight.grad.zero_():

AttributeError: 'NoneType' object has no attribute 'zero_'

How can I fix it? Thanks :slight_smile:

Hi,

When you do weight = weight - weight.grad*lr, weight now points to a brand-new Tensor, so the gradient information from the original weight Tensor is gone. You can check that after this line, weight.grad is None.
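
You can see this in isolation (a minimal sketch, separate from your training loop):

import torch

w = torch.randn(2, 1, requires_grad=True, dtype=torch.float64)
w.sum().backward()
print(w.grad is None)   # False: the leaf Tensor now holds a gradient
w = w - 0.01 * w.grad   # rebinds the name to a brand-new, non-leaf Tensor
print(w.grad is None)   # True (newer PyTorch also warns that .grad on a non-leaf is None)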

The other problem you’re going to encounter is that weight = weight - XXX will be tracked by autograd, which you most likely don’t want.

To fix these, you can:

  • Change weight in place, to avoid the first problem above
  • Disable autograd while you update your weights, to avoid the second one.

Here is the updated code:

for i in range(epochs):
    predict = torch.mm(feature, weight) + bias.item()
    loss = torch.sum(predict - label, dim=0)
    loss.backward()
    # Disable the autograd
    with torch.no_grad():
        # Inplace changes
        weight.sub_(weight.grad*lr)
        bias.sub_(bias.grad*lr) # A .grad is missing in your code here I think ;)
        # Do the reset in no-grad mode as well, in case you do second-order
        # derivatives later (meaning that weight.grad will itself require grad)
        weight.grad.zero_()
        bias.grad.zero_()

Thank you! But I still get the same error following your code:

Traceback (most recent call last):
  File "/Users/audrey/PycharmProjects/PyTorchLearning/main.py", line 31, in <module>
    bias.grad.data.zero_()
AttributeError: 'NoneType' object has no attribute 'data'

Then I changed bias.item() to bias:

predict = torch.mm(feature, weight) + bias

No error appears anymore! I wonder why bias.item() doesn’t work.
Anyway, my code runs successfully now, thanks!

I think I found the reason why bias.item() doesn’t work: using a scalar means the bias tensor is not tracked.

Exactly: .item() gives you a Python number, which means you don’t have a Tensor anymore (and so no autograd anymore).
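
A tiny illustration (a minimal sketch, independent of the code above):

import torch

b = torch.randn(1, requires_grad=True)
print(type(b.item()))    # <class 'float'>: a plain Python number

out = 3.0 * b            # b stays a Tensor, so it stays in the graph
out.backward()
print(b.grad)            # tensor([3.])

out2 = torch.tensor(2.0) * b.item()  # b.item() is just a constant float here
print(out2.requires_grad)            # False: no gradient can flow back to b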

Also, given your error message, you seem to be using .data. You should not use it; use with torch.no_grad() instead, as in my example.
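
For reference, this manual pattern (in-place update under no_grad, then zeroing the grads) is what torch.optim.SGD does for you; a minimal sketch, assuming the feature, label, weight and bias Tensors from the first post:

optimizer = torch.optim.SGD([weight, bias], lr=lr)
for i in range(epochs):
    predict = torch.mm(feature, weight) + bias   # keep bias as a Tensor
    loss = torch.sum(predict - label, dim=0)
    optimizer.zero_grad()    # resets weight.grad and bias.grad
    loss.backward()
    optimizer.step()         # in-place update, not tracked by autograd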

I tried to use with torch.no_grad() to resolve this error:

import torch
from torch import nn

def weights_init_normal(m):
    classname = m.__class__.__name__
    lr = 0.01
    if isinstance(m, nn.Conv2d) or isinstance(m, nn.Linear):
        torch.nn.init.normal_(m.weight, mean=0.0, std=0.02)
        with torch.no_grad():
            m.bias.sub_(m.bias.grad*lr)
            m.bias.grad.zero_()

I got an error:
AttributeError: 'NoneType' object has no attribute 'sub_'

I found out my bias is None.

I tried several techniques, but nothing worked.

I wanted to initialize the bias to zero.

Please help!!!

Hi,

By default, the .grad attribute is None, which means “full of zeros”.
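
And if the goal is just to zero-initialize the bias (there is no gradient to subtract at initialization time), a minimal sketch:

import torch
from torch import nn

def weights_init_normal(m):
    if isinstance(m, (nn.Conv2d, nn.Linear)):
        torch.nn.init.normal_(m.weight, mean=0.0, std=0.02)
        if m.bias is not None:            # some layers are built with bias=False
            torch.nn.init.zeros_(m.bias)  # init functions already run under no_grad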

Oh, that means it is OK not to touch the None bias. However, my model was not converging; maybe that’s my error. Thanks for the knowledge :slight_smile: