How to find the layer definition corresponding to a given grad_fn?

When debugging the network's backward pass, how can one get back to the pre-defined layer info given a grad_fn?

Hi,

Sorry, could you clarify your question?
What do you call “the pre-defined layer”?
What do you mean by “with grad_fn given”?

Sorry, I didn't make it clear.
What I want to ask is: when tracing back through the computational graph, starting from the network output's grad_fn, how can I tell which layer module a given backward function corresponds to?

For instance,

import torch
import torch.nn as nn

layer1 = nn.Conv2d(3, 16, kernel_size=3)   # example channel/kernel sizes
layer2 = nn.Conv2d(16, 32, kernel_size=3)
net = nn.Sequential(layer1, layer2)        # Sequential takes modules, not a list

input = torch.randn(1, 3, 32, 32)
output = net(input)

layer_grad_fn = output.grad_fn.next_functions[0][0]

layer_grad_fn is of type ThnnConv2DBackward. How can I know which layer layer_grad_fn corresponds to, layer1 or layer2? Which attribute binds them together?

In this example layer_grad_fn may correspond to layer2, because the backward functions are connected in reverse order and the network is simple. But it's hard to count them by hand in large networks.
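For what it's worth, I can print that reverse ordering with a small helper like this (just a rough sketch I wrote, reusing output from the example above), but it still doesn't tell me which module each backward node came from:

def print_graph(fn, depth=0):
    # Recursively print a backward node and the nodes it feeds into.
    if fn is None:
        return
    print("  " * depth + type(fn).__name__)
    for next_fn, _ in fn.next_functions:
        print_graph(next_fn, depth + 1)

print_graph(output.grad_fn)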

Hi,

At the moment nothing links them together.
The computational graph is an autograd construct which does not know about nn.Modules.
It is quite hard to link the two of them unfortunately.

Oh, thank you for pointing that out!

But the autograd graph is generated from the modules. If I can't trace back from the grad_fn, can I find out which grad_fn a specific layer module generates? Where would I see that, in forward()? Not knowing the correspondence between modules and grad_fns makes it hard for me to debug the network, please help T_T. A lot of thanks again!

In the Module's forward pass, you know which module you're in, but this information is not available afterwards because the graph is saved independently of the Module itself. So you can't trace back from a grad_fn to the module that created it.

But in an older version of PyTorch, the grad_fn kept nearly a full copy of the layer information, so I could still trace the relation. In the new version I can't find that information in grad_fn anymore T_T. Has it been moved into private variables?

Another potential way would be to add hooks to each module layer; would that work?
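Something like this rough sketch is what I have in mind (my own guess, reusing net and input from the example above, and assuming a forward hook fires after the output tensor and its grad_fn exist, and that output.grad_fn returns the same Python object each time so it can be used as a dict key):

grad_fn_to_module = {}

def make_hook(name):
    def hook(module, inputs, output):
        # Remember which module produced this output's grad_fn.
        if output.grad_fn is not None:
            grad_fn_to_module[output.grad_fn] = (name, module)
    return hook

for name, module in net.named_modules():
    # Only hook leaf modules so the Sequential container does not overwrite their entries.
    if len(list(module.children())) == 0:
        module.register_forward_hook(make_hook(name))

output = net(input)
print(grad_fn_to_module.get(output.grad_fn))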

An older version of PyTorch? grad_fn has always been independent of nn.Modules.

What information are you looking for exactly?

Take the earlier example:

layer1 = nn.Conv2d(3, 16, kernel_size=3)
layer2 = nn.Conv2d(16, 32, kernel_size=3)
net = nn.Sequential(layer1, layer2)

output = net(input)

layer_grad_fn = output.grad_fn.next_functions[0][0]

Here, in the graph, I need to know the number of input and output channels of layer_grad_fn, which is a ThnnConv2DBackward, and sometimes the conv kernel size as well.

Is there any solution?

I am afraid it is not that easy to do.
The simplest way I see is to use layer_grad_fn.next_functions[1][0].variable, which is the weight of the conv, and layer_grad_fn.next_functions[2][0].variable, which is the bias of the conv. By looking at the size of the weight tensor you can recover the kernel size, since the weight is laid out as out_chan x in_chan x kern x kern.
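In code, roughly something like this (a sketch based on the ordering above; the exact indices in next_functions can differ between versions and for convs without bias):

layer_grad_fn = output.grad_fn.next_functions[0][0]

# AccumulateGrad nodes hold the conv's leaf parameters in .variable.
weight = layer_grad_fn.next_functions[1][0].variable
bias = layer_grad_fn.next_functions[2][0].variable

# The conv weight is laid out as out_chan x in_chan x kern_h x kern_w.
out_chan, in_chan, kern_h, kern_w = weight.shape
print(out_chan, in_chan, (kern_h, kern_w))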

Yeah, that doesn't seem like a perfect solution, and it still leaves in_chan and out_chan unknown as well.

However, the in_chan and out_chan of a grad_fn must be readable at the backend level, since they are needed to perform the forward pass.

Anyway, thank you alban! Really nice of you to answer so patiently!