How to access the computational graph?

I have seen many people asking for this on the PyTorch forum and getting only unhelpful responses. If I have a loss tensor and I call loss.backward(), I want to know which tensors are going to receive gradients, and so on. In short, I want to access the computational graph.

Hi,

Which posts in particular?
There are ways to traverse the graph, but since we rebuild it at every iteration (to be able to handle any dynamic control flow the user wants), it is not as structured as it is in other frameworks.

But as you might have seen in other posts, you can traverse the graph's Nodes using .next_functions, after first accessing the Tensor's producing Node via .grad_fn.
The AccumulateGrad Nodes are the ones responsible for accumulating gradients into the .grad fields of leaf Tensors, and they have a .variable attribute that tells you which Tensor they will accumulate the gradient into.
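As a quick sketch of what this looks like in practice (a toy loss built from two leaf tensors; nothing beyond the attributes mentioned above is assumed):

```python
import torch

# Two leaf Tensors that will receive gradients.
w = torch.randn(3, requires_grad=True)
b = torch.randn(3, requires_grad=True)
loss = ((w * 2 + b) ** 2).sum()

# .grad_fn is the Node that produced this Tensor in the graph.
print(loss.grad_fn)                 # a SumBackward0 Node
# .next_functions points at the Nodes that produced its inputs.
print(loss.grad_fn.next_functions)  # ((PowBackward0, 0),)

# Following next_functions repeatedly eventually reaches AccumulateGrad
# Nodes, whose .variable attribute is the leaf Tensor (w or b here)
# that .grad will be written into.
```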


Thanks, that (the next_functions attribute) is exactly what I was looking for. I noticed that next_functions is not present in the documentation, though. Why is that? It returns a tuple, so it is unclear what each element represents. Also, how can I access the AccumulateGrad Nodes?

It is not present in the documentation because it is more of an implementation detail than something the user should rely on. We expose it for convenience and debugging purposes.

The tuple contains one entry per input of the current Node. Each element is a pair: the next Node, and the index of that Node's output corresponding to the Tensor that was used as input here.
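A small sketch of these pairs (the only assumption is a simple elementwise product followed by a sum):

```python
import torch

x = torch.randn(2, requires_grad=True)
y = torch.randn(2, requires_grad=True)
z = (x * y).sum()

# Each entry of next_functions is a (next Node, output index) pair.
for node, idx in z.grad_fn.next_functions:
    print(type(node).__name__, idx)  # MulBackward0 0

# One level further down, the multiply's inputs are the two leaves,
# so its next_functions holds the two AccumulateGrad Nodes.
mul_node = z.grad_fn.next_functions[0][0]
print([type(n).__name__ for n, _ in mul_node.next_functions])
```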
The AccumulateGrad Nodes are just one type of Node. You can check the string representation (or the type name) and see whether it matches, I guess.
To collect them all, you will need to traverse the whole graph (which is guaranteed to be acyclic) and save every AccumulateGrad Node you encounter.
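Putting the pieces above together, here is a minimal sketch of such a traversal (iterative depth-first search; the helper name `collect_leaf_accumulators` and the type-name check are my own choices, not an official API):

```python
import torch

def collect_leaf_accumulators(loss):
    """Walk the autograd graph starting from loss.grad_fn and return the
    leaf Tensors that will receive gradients (via AccumulateGrad Nodes)."""
    leaves = []
    seen = set()          # Nodes can be shared, so guard against revisits
    stack = [loss.grad_fn]
    while stack:
        node = stack.pop()
        # next_functions entries can be None for inputs without grad_fn.
        if node is None or node in seen:
            continue
        seen.add(node)
        if type(node).__name__ == "AccumulateGrad":
            leaves.append(node.variable)  # the Tensor .grad is written into
        for next_node, _ in node.next_functions:
            stack.append(next_node)
    return leaves

w = torch.randn(3, requires_grad=True)
b = torch.randn(3, requires_grad=True)
loss = ((w * 2 + b) ** 2).sum()
print(collect_leaf_accumulators(loss))  # contains w and b
```

The `seen` set matters: a Tensor used in several operations makes its Node reachable along several paths, and without deduplication the same AccumulateGrad Node would be reported more than once.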