Hi,

This may be a ‘for dummies’ question.

I have seen many blogs and YouTube videos on how autograd works, but I haven’t found what the gradient actually means.

As far as I understand, the gradient is a kind of derivative.

But what does it mean when I do the following?

```python
x = Variable(torch.tensor([2.]), requires_grad=True)
print(x)
print(x.grad)
print(x.grad_fn)
```

```
> tensor([2.], requires_grad=True)
> None
> None
```

```python
y = x + torch.randn(1)
print(y)
print(y.grad)

y.backward()
print(x)
print(x.grad)
```

```
> tensor([3.5410], grad_fn=<AddBackward0>)
> None
> tensor([2.], requires_grad=True)
> tensor([1.])
```

When y.backward() is executed once, x.grad is tensor([1.]). After doing it another time (recomputing y first), it becomes tensor([2.]).

What do these values mean? This does not look like a derivative to me at all.
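To make this easy to reproduce, here is a self-contained version of what I ran, using torch.tensor directly instead of the (I believe deprecated) Variable wrapper; I am assuming the derivative of y = x + c with respect to x should be 1 here:

```python
import torch

# Same setup as above, but without the Variable wrapper
x = torch.tensor([2.], requires_grad=True)

# y = x + c, where c is a random constant, so dy/dx = 1
y = x + torch.randn(1)
y.backward()
g1 = x.grad.clone()
print(g1)  # tensor([1.])

# Recompute y and call backward() a second time: the new
# gradient is added to x.grad rather than replacing it
y = x + torch.randn(1)
y.backward()
g2 = x.grad.clone()
print(g2)  # tensor([2.])
```

So the 1 and 2 I see are exactly the values this sketch produces on each backward() call.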

Regards,
Hans