The principle of autograd

import torch

a = torch.tensor([1, 2, 3.], requires_grad=True)
out = a.sigmoid()
c = out.data
c.fill_(6)
out.sum().backward()
>>> a.grad  # The result is very, very wrong because `out` was changed in place!
tensor([-30., -30., -30.])

I know the result of a.grad is wrong, but now I want to know how the tensor([-30., -30., -30.]) was computed. I want to understand how this result is calculated, to better understand the principle of autograd.

Hi,

The thing is that the change you make to c makes no sense. Depending on the implementation of the function that ran just before, it can change things or change nothing.
So I have no idea how you can get these numbers.
And even if you find a reason for them in this particular case, it means absolutely nothing about how autograd works in general. It would only tell you about implementation details of sigmoid and sum.
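That said, the specific numbers can plausibly be traced to one such implementation detail: sigmoid saves its forward output and its backward computes grad_input = grad_output * out * (1 - out) from that saved value. Since `fill_(6)` overwrote the saved output in place and `sum()` passes grad_output = 1, each entry becomes 1 * 6 * (1 - 6) = -30. A minimal sketch, assuming this formula for sigmoid's backward:

```python
import torch

a = torch.tensor([1, 2, 3.], requires_grad=True)
out = a.sigmoid()
out.data.fill_(6)      # corrupt the saved forward output in place
out.sum().backward()   # sum() sends grad_output = 1 into sigmoid's backward

# sigmoid's backward reuses the (now corrupted) saved output:
# grad_input = grad_output * out * (1 - out) = 1 * 6 * (1 - 6) = -30
print(a.grad)          # tensor([-30., -30., -30.])
```

Again, this only explains the number for this version of sigmoid; another implementation (e.g. one that recomputes from the input) could give a different result, which is exactly why mutating `.data` is unsafe.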
