Not able to understand autograd

I was reading the official tutorial on autograd (link: http://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html). Once out.backward() is performed, I tried printing the values of z.grad and y.grad and got None. Shouldn't z.grad have a non-None value, and likewise y.grad?

For easy reference, here is the code.

import torch
from torch.autograd import Variable

x = Variable(torch.ones(2, 2), requires_grad=True)
y = x + 2
z = y * y * 3
out = z.mean()
out.backward()

print(y.grad)  # prints None
print(z.grad)  # prints None

PyTorch does not expose the gradients of y and z because they are not “leaf” Variables.

x is a “leaf” Variable because you created it using Variable(…).
y, z, and out are not "leaf" Variables because they are the results of calculations.
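
You can check this yourself with the is_leaf attribute (a minimal sketch, assuming a reasonably recent PyTorch version):

import torch
from torch.autograd import Variable

x = Variable(torch.ones(2, 2), requires_grad=True)  # created by the user -> leaf
y = x + 2                                           # result of an operation -> not a leaf

print(x.is_leaf)  # True
print(y.is_leaf)  # False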


By "does not expose", do you mean that PyTorch does not allow us to print it with print(y.grad)? But it does get calculated, right? And the gradients are back-propagated?

The gradient with respect to y does get calculated during backprop, but once it has been used it is discarded to save memory. How else would backprop work?
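
If you want to observe that gradient as it flows through y, one option is a hook; a minimal sketch using register_hook (the hook is called with the gradient with respect to y during the backward pass):

x = Variable(torch.ones(2, 2), requires_grad=True)
y = x + 2
z = y * y * 3
out = z.mean()

# the hook fires with dout/dy; returning nothing leaves the gradient unchanged
y.register_hook(lambda grad: print(grad))

out.backward()  # prints a 2x2 tensor filled with 4.5 (= 1.5 * y, with y = 3)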

Others have asked how to get the gradient with respect to y: How to compute the gradients of non leaf variables in PyTorch
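
For completeness, one approach (in PyTorch versions that provide retain_grad()) is to ask autograd to keep the gradient on a non-leaf Variable instead of discarding it:

x = Variable(torch.ones(2, 2), requires_grad=True)
y = x + 2
z = y * y * 3
out = z.mean()

y.retain_grad()  # ask autograd to store y.grad instead of freeing it
out.backward()

print(y.grad)  # now a 2x2 tensor of 4.5 instead of None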


Okay, got the point. So only the leaf Variables' grad is stored and available to inspect, and the grad for non-leaf Variables is only calculated when needed and then discarded?


Thanks for the help @jpeg729
