# Dummy question: Autograd, what does it mean?

(Hg ) #1

Hi,
Maybe a ‘for dummies’ question.
I have seen many blogs and YouTube videos on how autograd works.
But I haven’t found what it actually means.

As far as I understand, the gradient is a kind of derivative.
But what does it mean when I do the following?
```python
x = Variable(torch.tensor([2.]), requires_grad=True)
print(x)
print(x.grad)
print(x.grad_fn)
```

> tensor([2.], requires_grad=True)
> None
> None
```python
y = x + torch.randn(1)
print(y)
print(y.grad)
y.backward()
print(x)
print(x.grad)
```

> tensor([3.5410], grad_fn=&lt;AddBackward0&gt;)
> None
> tensor([2.], requires_grad=True)
> tensor([1.])

When y.backward() is executed once, x.grad = 1.
After running it another time it becomes 2.
What do these values mean?
This does not look like a derivative to me at all.

Regards Hans

(Juan F Montesinos) #2

Have a look at this video.

(Hg ) #3

Hi,
That was one of the videos that I had already seen.
It explains how it works very well,
but not what it means.

Why is my gradient 1 when I do a backward operation on y:

```python
y = x + torch.randn(1)
```

where

```python
x = Variable(torch.tensor([2.]), requires_grad=True)
```

Regards Hans-Peter

#4

Because `.backward()` accumulates gradients into every variable with `requires_grad=True`. That is why we call `optimizer.zero_grad()` before each `.backward()` during the training phase.

Since y = x + c for some constant c, the derivative dy/dx = 1, so each `.backward()` adds 1 to the accumulated gradient. Thus x.grad = 1 after the first `.backward()` and x.grad = 2 after the second.
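A minimal sketch of this accumulation behaviour (assuming a recent PyTorch, where `torch.autograd.Variable` is deprecated and a plain tensor with `requires_grad=True` suffices):

```python
import torch

# A leaf tensor that autograd will track.
x = torch.tensor([2.0], requires_grad=True)

# y = x + c, so dy/dx = 1 regardless of the random constant c.
y = x + torch.randn(1)
y.backward()
print(x.grad)  # tensor([1.])

# Building a fresh graph and calling backward again ADDS another 1,
# because gradients accumulate in x.grad.
y = x + torch.randn(1)
y.backward()
print(x.grad)  # tensor([2.])

# Zeroing the gradient (what optimizer.zero_grad() does) starts over.
x.grad.zero_()
y = x + torch.randn(1)
y.backward()
print(x.grad)  # tensor([1.])
```

Note that each `backward()` here is called on a freshly built `y`; calling `backward()` twice on the same `y` would raise an error unless `retain_graph=True` is passed.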