# Problem with grad on second iteration

Hi!
I’m trying to implement logistic regression from scratch using the autograd engine, but I’m running into problems during backprop.

I’ve done the following:

```python
import torch
import torch.nn.functional as F

W = torch.rand(n, n_c, dtype=torch.double, requires_grad=True)
b = torch.zeros(n_c, dtype=torch.double, requires_grad=True)

for i in range(100):
    Z = torch.from_numpy(X) @ W + b
    L = F.cross_entropy(Z, torch.from_numpy(Y))
    L.backward()
```

On the second iteration `W.grad` becomes `None`, so I get an exception when trying to update the parameters. I looked for a way to reset my grads but didn’t find anything useful except methods on optimizers, and I’m deliberately not using an optimizer yet while I run some tests.

I even tried `retain_graph=True`, but the problem persists.

Some help would be great, please!

You should use an in-place operation to update the leaf nodes of the computation graph (`W` and `b` are leaf tensors). An out-of-place update like `W = W - 0.001 * W.grad` rebinds `W` to a new, non-leaf tensor, so autograd no longer populates its `.grad` on the next iteration:

```python
with torch.no_grad():          # required: in-place ops on leaf tensors that require grad
    W.sub_(0.001 * W.grad)
    b.sub_(0.001 * b.grad)
```
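Putting it together, here is a minimal runnable sketch. `X`, `Y`, `n`, and `n_c` are not shown in the original post, so synthetic placeholder values are assumed; the learning rate 0.001 is taken from the snippet above. The key points are the `torch.no_grad()` block around the in-place updates (which keeps `W` and `b` as leaf tensors) and zeroing the grads each iteration, since `backward()` accumulates into `.grad`:

```python
import numpy as np
import torch
import torch.nn.functional as F

# Assumed shapes and synthetic data (not shown in the original post).
m, n, n_c = 50, 4, 3                     # samples, features, classes
rng = np.random.default_rng(0)
X = rng.standard_normal((m, n))          # float64, matches W's dtype
Y = rng.integers(0, n_c, size=m)         # int64 class labels

W = torch.rand(n, n_c, dtype=torch.double, requires_grad=True)
b = torch.zeros(n_c, dtype=torch.double, requires_grad=True)

Xt, Yt = torch.from_numpy(X), torch.from_numpy(Y)

losses = []
for i in range(100):
    Z = Xt @ W + b
    L = F.cross_entropy(Z, Yt)
    L.backward()
    losses.append(L.item())

    with torch.no_grad():                # keep the update out of the graph
        W.sub_(0.001 * W.grad)           # in-place: W remains a leaf
        b.sub_(0.001 * b.grad)
    W.grad.zero_()                       # reset grads accumulated by backward()
    b.grad.zero_()
```

After the loop, `W` is still a leaf tensor with a valid `.grad`, and the loss decreases across iterations.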