# Grad is None! but can't understand why!?

Hi,
I’ve started to learn PyTorch and I’m loving it.
I’ve learned a few things and tried to implement logistic regression from scratch,
challenging myself not to use the `nn` module or any built-in optimizer or loss function, and to write these myself.

So I’ve written the code, but I’m having a problem that I’ve been stuck on for quite some time now.
I would really appreciate it if you could show me why this doesn’t work and how to avoid such situations, as it seems to me that `autograd` can be confusing.

```
import torch
import matplotlib.pyplot as plt

# `dataset`, `d`, `m`, `epochs`, and `alpha` are defined earlier in the notebook

def get_one_sample(idx):
    # pull one (features, label) row out of the pandas DataFrame
    return dataset.iloc[idx, :-1].values, dataset.iloc[idx, -1]

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

w = torch.randn((d, 1), dtype=torch.float64, device=device, requires_grad=True)
b = torch.randn((1, 1), dtype=torch.float64, device=device, requires_grad=True)

losses = []
for epoch in range(epochs):
    for i in range(m):
        x_np, y_np = get_one_sample(i)
        x = torch.from_numpy(x_np).to(device).view(-1, 1)
        y = torch.tensor([y_np], dtype=torch.float, device=device)

        # forward computation
        y_hat = torch.sigmoid(w.T @ x + b)

        # loss computation & loss backprop (negative log-likelihood)
        loss = -torch.log(y_hat) if y_np == 1 else -torch.log(1 - y_hat)

        losses.append(loss.item())

        loss.backward()

        # updating weights
        w = w - alpha * w.grad
        b = b - alpha * b.grad

    print("epoch:", epoch)

plt.plot(losses)
plt.show()
```

The console output before the crash:

```
tensor([[-4.1593e-07],
        [-9.9529e-07],
        [ 3.2241e-07],
        [ 5.1336e-08]], device='cuda:0', dtype=torch.float64)
None
None
```

The runtime then stops at `w.grad.data.zero_()` with the following error:

```
----------------------------------------------------------------------
AttributeError                       Traceback (most recent call last)
<ipython-input-79-eb9f256e22e0> in <module>

AttributeError: 'NoneType' object has no attribute 'data'
```

Also, I can’t quite figure out why it outputs three lines of print output, whereas I obviously have only two print statements!

I fixed it myself.

When updating the weights, I have to put the updates inside `with torch.no_grad():`.
Apparently, not doing so makes the update itself part of the computational graph, so `w` gets reassigned to a new non-leaf tensor whose `.grad` is never populated. (Am I right?)
Also, it is correct to use `w.grad.zero_()`, not `w.grad.data.zero_()`, though I can’t understand why yet!
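
For anyone who runs into the same thing, here is a minimal sketch of what I now believe goes wrong (a toy tensor, not my actual model): reassigning `w` replaces the leaf tensor with a non-leaf node of the graph, and autograd only populates `.grad` on leaf tensors.

```
import torch

w = torch.randn(3, requires_grad=True)  # leaf tensor: .grad gets populated
loss = (w ** 2).sum()
loss.backward()
print(w.is_leaf, w.grad)                # True  tensor([...])

w = w - 0.1 * w.grad                    # reassignment: w is now a non-leaf result
loss = (w ** 2).sum()
loss.backward()
print(w.is_leaf, w.grad)                # False None (PyTorch warns about this)
```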

So, the corrected part is as follows:

```
        loss.backward()

        # updating weights inside no_grad, so the update is not recorded
        # in the graph and w, b stay leaf tensors
        with torch.no_grad():
            w -= alpha * w.grad
            b -= alpha * b.grad

            w.grad.zero_()
            b.grad.zero_()
```
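
As for the `zero_()` part: as far as I can tell, `backward()` accumulates into `.grad` instead of overwriting it, which is why the gradients have to be reset every step. A tiny standalone check (again a toy tensor, not my model):

```
import torch

w = torch.randn(3, requires_grad=True)

loss = (w ** 2).sum()
loss.backward()
g1 = w.grad.clone()                     # gradient after the first backward pass

loss = (w ** 2).sum()
loss.backward()                         # no zeroing: the new gradient is added on top
print(torch.allclose(w.grad, 2 * g1))   # True: .grad has doubled
```

(And the `AttributeError` itself was just a consequence of the first bug: on the reassigned non-leaf `w`, `w.grad` was `None`, so `.data` could not be accessed on it.)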