I’m trying to understand how `autograd` works. Suppose I have the following code:

```python
import torch
from torch.autograd import Variable

x = Variable(torch.arange(0, 4).double(), requires_grad=True)
y = Variable(x * 2, requires_grad=True)
z = y * x
z.backward(torch.ones(y.size()).double())
y.sum().backward()
print(x.grad, 2 * x)
```

It seems to work correctly. The problem is that if I change the line `z = y * x` to `z = Variable(y * x, requires_grad=True)`, it no longer works. What’s the difference? Thank you.
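In case it helps pin down what I mean, here is the same experiment in the post-0.4 tensor API, without the `Variable` wrapper. I’m assuming that rewrapping a result with `Variable(..., requires_grad=True)` is analogous to `.detach().requires_grad_(True)`, i.e. it creates a new leaf; that is my reading, not something I’ve confirmed in the docs:

```python
import torch

# Post-0.4 equivalent of the snippet above, without the Variable wrapper.
x = torch.arange(0, 4, dtype=torch.float64, requires_grad=True)

# Working case: y and z stay connected to x in the autograd graph.
y = x * 2
z = y * x                        # z = 2 * x**2
z.backward(torch.ones_like(z))   # dz/dx = 4x
print(x.grad)                    # 4*x -> [0., 4., 8., 12.]

# Changed case: rewrapping the product makes z a new leaf, cutting it
# off from x (my assumption of what Variable(y*x, requires_grad=True) does).
x2 = torch.arange(0, 4, dtype=torch.float64, requires_grad=True)
z2 = (x2 * 2 * x2).detach().requires_grad_(True)
z2.backward(torch.ones_like(z2))
print(x2.grad)                   # None -- the backward pass never reaches x2
```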