Why can't I see .grad of an intermediate variable?

Hi Kalamaya,

By default, gradients are only retained for leaf variables; non-leaf (intermediate) variables' gradients are not kept around to be inspected later. This is by design, to save memory.
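
As a minimal illustration of this behavior (a sketch using the same Variable-style API as the example further below), the intermediate variable's .grad is not populated after backward, while the leaf's .grad is:

from torch.autograd import Variable
import torch

xx = Variable(torch.randn(1, 1), requires_grad=True)  # leaf variable
yy = 3 * xx                                            # intermediate (non-leaf) variable
(yy ** 2).backward()

print(xx.grad)  # populated with d(yy**2)/dxx
print(yy.grad)  # None -- gradients of non-leaf variables are not retained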

However, you can inspect and extract the gradients of intermediate variables via hooks.
You can register a function on a Variable that will be called with that variable's gradient when it is computed during the backward pass.

More documentation on hooks is here: http://pytorch.org/docs/autograd.html#torch.autograd.Variable.register_hook

Here’s an example of calling the print function on the variable yy to print out its gradient. (You can also define your own function that copies the gradient over elsewhere, or modifies the gradient, for example; a sketch of that is shown after the output below.)

from __future__ import print_function
from torch.autograd import Variable
import torch

xx = Variable(torch.randn(1, 1), requires_grad=True)  # leaf variable
yy = 3 * xx                                            # intermediate variable
zz = yy ** 2

yy.register_hook(print)  # the hook is called with yy's gradient during backward
zz.backward()

Output:

Variable containing:
-3.2480
[torch.FloatTensor of size 1x1]
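
If you want to keep the gradient around rather than just print it, one way (a minimal sketch; the grads dict and save_grad helper are just illustrative names, not part of the API) is a hook that stashes the gradient in a dictionary:

from __future__ import print_function
from torch.autograd import Variable
import torch

grads = {}  # storage for captured gradients, keyed by name

def save_grad(name):
    # returns a hook that stores the incoming gradient under the given name
    def hook(grad):
        grads[name] = grad
    return hook

xx = Variable(torch.randn(1, 1), requires_grad=True)
yy = 3 * xx
zz = yy ** 2

yy.register_hook(save_grad('yy'))
zz.backward()

print(grads['yy'])  # same value the print hook would have shown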