How to get nn.Module from autograd's graph trace?

Hi,

I need to access the modules of a model while I take a backward graph trace. I want to work with the graph trace and the corresponding nn.Module objects simultaneously. Is there any way to match a .grad_fn to its nn.Module?

For example:

  1. Import pytorch
    import torch
    from torch.autograd import Variable
    from torch import nn

  2. Create a model (it could be a more complex one)
    model = nn.Conv2d(3, 10, kernel_size=1)

  3. Run the model on some input variable
    input_var = Variable(torch.randn(1, 3, 10, 10), requires_grad=True)
    output_var = model(input_var)

  4. Now I can access output_var.grad_fn; it is a ConvNdBackward object. I want to map this ConvNdBackward back to the nn.Conv2d it came from, get access to all members of that nn.Conv2d, and do the same for each submodule of the model. Can I do that?
    print(output_var.grad_fn)

  5. I want to do something like this:
    fn = output_var.grad_fn
    conv = fn.to_module()  # hypothetical API: grad_fn -> nn.Module


Hi,

Unfortunately it is not possible right now.

Oh,

That’s unfortunate. But how can I get feature map sizes from grad_fn? (I could find the feature map size only for some layers, such as Pooling, ReLU, etc.)

Why are you trying to explore the graph using grad_fn? The grad_fn and the graph attached to it contain very specific (and quite limited) information.

Because I haven’t found any other way to get the graph topology.
I need the topology of the graph, the feature map sizes/activations/gradients for each node, and some other statistics. Currently I can get the topology or some of the statistics independently, but I can’t match this information together. Walking .next_functions is the only handle on the topology I found, as sketched below.
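A minimal sketch of that walk; .next_functions is the real autograd attribute, but the node class names it prints vary across PyTorch versions:

    def walk(fn, depth=0, seen=None):
        # Each backward node lists its inputs as (function, input_index)
        # pairs in .next_functions; an entry's function is None when
        # that input does not require gradients.
        seen = set() if seen is None else seen
        if fn is None or fn in seen:
            return
        seen.add(fn)
        print('  ' * depth + type(fn).__name__)
        for next_fn, _ in fn.next_functions:
            walk(next_fn, depth + 1, seen)

    walk(output_var.grad_fn)

This prints the backward graph as an indented tree, but the nodes carry no reference back to the modules that created them.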

And one more question: where can I find the code that converts from a Module to a Function?
I think I could add some extra members to grad_fn to identify the layer…

Such a conversion does not exist.
An nn.Module is a high-level structure that makes it convenient to write neural networks and compose different operations.
A Function is an elementary building block of the autograd engine; it is created when doing computations on Variables and allows the gradients to be computed.
There is no one-to-one matching between them.

Moreover, the Function objects come from the C++ engine, so even though you can see them from Python, you should not use them to store data: the Python object is destroyed as soon as it is no longer needed on the Python side.
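To make this concrete: the very same backward node can be produced by an nn.Module or by a plain functional call, so the Function has nothing to point back to. A minimal sketch; the printed class name (e.g. ConvNdBackward) depends on the PyTorch version:

    import torch
    from torch import nn
    import torch.nn.functional as F
    from torch.autograd import Variable

    x = Variable(torch.randn(1, 3, 10, 10), requires_grad=True)
    weight = Variable(torch.randn(10, 3, 1, 1), requires_grad=True)

    module_out = nn.Conv2d(3, 10, kernel_size=1)(x)
    functional_out = F.conv2d(x, weight)

    # Both outputs carry the same kind of backward node, even though
    # only one of them came from an nn.Module:
    print(type(module_out.grad_fn).__name__)
    print(type(functional_out.grad_fn).__name__)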

Oh,

In this case: are there any agreed conventions (or recommendations) on how to get a full description of the network:

  1. Topology
  2. Dimensions of input/output feature maps for each layer
  3. Layer parameters
  4. Activation / gradient values

Currently I have found ways to get each of these descriptions independently, but the resulting descriptions can’t be merged afterwards.
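For example, forward hooks give me the per-module shapes (and similarly activations), but nothing about how the modules connect. A minimal sketch, assuming a small illustrative Sequential model:

    import torch
    from torch import nn
    from torch.autograd import Variable

    model = nn.Sequential(nn.Conv2d(3, 10, kernel_size=1), nn.ReLU())
    sizes = {}

    def make_hook(name):
        def hook(module, inputs, output):
            # Record the input/output feature-map sizes for this module.
            sizes[name] = (tuple(inputs[0].size()), tuple(output.size()))
        return hook

    for name, module in model.named_modules():
        if not list(module.children()):  # leaf modules only
            module.register_forward_hook(make_hook(name))

    model(Variable(torch.randn(1, 3, 10, 10)))
    print(sizes)
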
Thanks in advance!

If you want this once, maybe exporting it with ONNX will give you a format that is easier to work with.
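Something along these lines, reusing the Conv2d model from above; "model.onnx" is just an illustrative filename:

    import torch
    from torch.autograd import Variable

    dummy_input = Variable(torch.randn(1, 3, 10, 10))
    # verbose=True also prints the traced graph, which carries the
    # topology and per-node information in one place.
    torch.onnx.export(model, dummy_input, "model.onnx", verbose=True)
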
Otherwise I don’t know how to do that.


I tried to use ONNX, but it only works for simple networks and fails for more complex ones (those containing layers such as AdaptivePooling, LogSoftmax, etc.).
But anyway, thanks for your help!