How to get the gradients for both the input and intermediate variables?

Thanks, @ptrblck.

However, it seems y.grad is not populated for the intermediate tensor y = x * x:

import torch

x = torch.tensor(0.3, requires_grad=True)
print(x)
# [output] tensor(0.3000, requires_grad=True)

y = x * x
print(y)
# [output] tensor(0.0900, grad_fn=<MulBackward0>)

z = 2 * y
print(z)
# [output] tensor(0.1800, grad_fn=<MulBackward0>)

z.backward()

print(y.grad)
# [output] None

print(x.grad)
# [output] tensor(1.2000)
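For what it's worth, here is a minimal sketch of what I'd expect to work based on the docs for Tensor.retain_grad(), which asks autograd to keep the gradient of a non-leaf tensor (the outputs in the comments are what I'd anticipate, not verified here):

import torch

x = torch.tensor(0.3, requires_grad=True)
y = x * x
y.retain_grad()  # keep the gradient of this intermediate (non-leaf) tensor
z = 2 * y
z.backward()

print(y.grad)
# expected: tensor(2.), since dz/dy = 2
print(x.grad)
# expected: tensor(1.2000), since dz/dx = 4x

Alternatively, Tensor.register_hook() should let me capture the gradient as it flows through y without storing it in y.grad.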

Anyway, I found this post that’s relevant to my question, and I’ll digest it first.