How to access/save intermediate nodes during backprop

Hi, is there a way to access the intermediate nodes that were used after calling backward() with create_graph=True?
The only reference I could find from a Google search is https://github.com/szagoruyko/functional-zoo/blob/master/visualize.py, but I can't really make sense of it.

In this example, I am trying to get the Hessian, but unfortunately the Hessian that is calculated during the second backward() call gets absorbed into x.grad.

import torch
from torch.autograd import Variable

x = Variable(torch.randn(2, 2), requires_grad=True)
loss = (x ** 2).sum()
# backprop once to get grads, keeping the graph so we can differentiate again
loss.backward(Variable(torch.ones(1), requires_grad=True), create_graph=True)
# backprop a second time; the Hessian(-vector product) gets accumulated into x.grad
x.grad.backward(Variable(torch.ones(x.grad.size())), create_graph=True)
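To make the mixing concrete: for loss = (x ** 2).sum(), the first backward() leaves 2 * x in x.grad, and the second backward() adds the Hessian-vector product (here 2 * ones) on top of it, so both end up summed together. A small sketch of that behaviour:

import torch
from torch.autograd import Variable

x = Variable(torch.randn(2, 2), requires_grad=True)
loss = (x ** 2).sum()

loss.backward(create_graph=True)
print(x.grad)   # 2 * x : the gradient of the loss

x.grad.backward(Variable(torch.ones(x.grad.size())), create_graph=True)
print(x.grad)   # 2 * x + 2 : gradient plus the Hessian-vector product, accumulated together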

If you don't want the gradients to accumulate, then I'd recommend using the functional interface (torch.autograd.grad). Use it to compute the gradients, build the actual loss you want to optimize from them, and run backward from that. There's no other way to keep the gradients from getting mixed up.
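For example, something along these lines, where the squared gradient norm is only a stand-in for whatever objective you actually build from the gradients:

import torch
from torch.autograd import Variable

x = Variable(torch.randn(2, 2), requires_grad=True)
loss = (x ** 2).sum()

# Functional interface: returns the gradients without touching x.grad.
grads, = torch.autograd.grad(loss, x, create_graph=True)

# Build the scalar you actually want to optimize from the gradients
# (the squared gradient norm here is just a placeholder) and backprop it.
grad_loss = grads.pow(2).sum()
grad_loss.backward()
# x.grad now contains only d(grad_loss)/dx; nothing from the first pass is mixed in.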

Thanks! Unfortunately, torch.autograd.grad doesn’t work here because x.grad is not a scalar…

Is there another way? It seems like it should be possible, since the Hessian appears to be calculated as an intermediate step…
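For reference, torch.autograd.grad does handle non-scalar outputs if you pass a grad_outputs vector (which gives a Hessian-vector product), and the full Hessian can be assembled by differentiating each entry of the gradient in turn. A minimal sketch, assuming a PyTorch version where indexing into the gradient yields a scalar that can be differentiated again:

import torch
from torch.autograd import Variable

x = Variable(torch.randn(2, 2), requires_grad=True)
loss = (x ** 2).sum()

# First derivative, kept as part of the graph so it can be differentiated again.
grads, = torch.autograd.grad(loss, x, create_graph=True)
flat_grads = grads.view(-1)

# One row of the Hessian per entry of the flattened gradient.
rows = []
for g in flat_grads:
    row, = torch.autograd.grad(g, x, retain_graph=True)
    rows.append(row.view(-1))
hessian = torch.stack(rows)
# For loss = (x ** 2).sum() this should be 2 * I (a 4x4 identity scaled by 2).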