Problem with grad on second iteration

Hi!
I’m trying to implement logistic regression from scratch first, using the autograd engine, but I’m running into problems during backprop.

I’ve done the following:

W = torch.rand(n, n_c, dtype=torch.double)
b = torch.zeros(n_c, dtype=torch.double)
W.requires_grad_(True)
b.requires_grad_(True)

for i in range(100):
    
    Z = torch.from_numpy(X)@W + b
    L = F.cross_entropy(Z, torch.from_numpy(Y))
    
    L.backward()
    
    W = W - 0.001*W.grad
    b = b - 0.001*b.grad

On the second iteration W.grad is None, so I get an exception when trying to update the parameters. I looked for a way to reset my grads but didn’t find anything useful, only for optimizers, and I’m trying not to use those for now since I’m just running some tests.

I even tried retain_graph=True, but the problem still happens.

Please, some help would be great!

You should use in-place operations on the leaf nodes of the computation graph (W and b are leaf variables). Reassigning W = W - 0.001*W.grad creates a new tensor that is no longer a leaf, so its .grad stays None on the next iteration.

    with torch.no_grad():        # keep the parameter update out of the graph
        W.sub_(0.001*W.grad)
        b.sub_(0.001*b.grad)

And backward() accumulates gradients in the leaves, so you also need to zero them (e.g. W.grad.zero_() and b.grad.zero_()) before the next call, otherwise gradients from earlier iterations keep adding up.
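
For reference, a minimal sketch of the whole loop with both fixes applied (in-place updates under torch.no_grad() and zeroing the gradients each iteration). It assumes X and Y are the same NumPy arrays as in your snippet, with Y holding integer class labels, and that n and n_c are defined:

    import torch
    import torch.nn.functional as F

    Xt = torch.from_numpy(X)    # features, shape (m, n)
    Yt = torch.from_numpy(Y)    # integer class labels, shape (m,)

    W = torch.rand(n, n_c, dtype=torch.double, requires_grad=True)
    b = torch.zeros(n_c, dtype=torch.double, requires_grad=True)

    for i in range(100):
        Z = Xt @ W + b                 # logits
        L = F.cross_entropy(Z, Yt)
        L.backward()                   # fills W.grad and b.grad

        with torch.no_grad():          # keep the update out of the graph
            W.sub_(0.001 * W.grad)
            b.sub_(0.001 * b.grad)

        W.grad.zero_()                 # backward() accumulates, so reset
        b.grad.zero_()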
