How to traverse a network

I would like to know how to traverse a network without any prior knowledge of the network, such as the naming convention. For example, if I have a layer (or module) in a network, how can I know what are the input layers and the output layer? Another example is that if I have a tensor, how can I know what layer (module) generated it?


Your question is vague.
If you want to know about a network, print the model after loading it:

import torchvision

vgg_model = torchvision.models.vgg16()
print(vgg_model)

The print call shows all the layers in the network.

Regarding the input and output layers: the input data is considered the input layer, and the last layer is considered the output layer. But convolutional networks have two sections: feature extraction and classification.
The feature-extraction section usually uses layers like Conv2d and MaxPool2d, while the classification section uses fully-connected layers (an MLP); however, some works use other classification methods, such as an SVM, instead.

Layers do not generate tensors on their own. Before feeding input data to a network you must apply a transformation; in the line where you define the transforms, the function that converts the input data to a tensor is included.

I hope I understood what you meant and answered correctly :blush:

Thanks for your reply!

What I want to do is actually very simple. Let’s use AlexNet as an example.
Assume I have an AlexNet but no idea what the architecture and the naming convention of that network are. Moreover, I have the final logits given by
logits = model_alexnet(input_var).

I would like to trace back from logits and get the following layers in order: FC8 (nn.Linear) -> ReLU_FC7 (nn.ReLU) -> FC7 (nn.Linear) -> ReLU_FC6 (nn.ReLU) -> … -> ReLU_CONV1 (nn.ReLU) -> CONV1 (nn.Conv2d). Moreover, I need to know the weights and biases of each layer and the activations of each layer's input and output feature maps.

print is not what I want, because I would like to modify the parameters of a layer based on the parameters of its nearby layers.

I know I can use state_dict() to extract all the parameters, but it does not tell me how the layers are connected. In particular, PyTorch is a flexible library and there are different ways to define a network. For example, we can put all the layers in a single nn.Sequential; we can also put the convolutional layers into one nn.Sequential, the fully-connected layers into another nn.Sequential, and wrap both in a third nn.Sequential. I wonder whether there is a universal way to traverse the network and get all the feature maps and parameters.
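For context, the closest I have gotten is registering forward hooks on the leaf modules, which records each module's output no matter how the nn.Sequential containers are nested (the model below is a made-up example, not AlexNet):

```python
import torch
import torch.nn as nn

# A made-up nested model to show that named_modules() plus forward
# hooks work regardless of how the Sequentials are nested.
model = nn.Sequential(
    nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()),
    nn.Sequential(nn.Flatten(), nn.Linear(8 * 30 * 30, 10)),
)

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output  # record this module's output feature map
    return hook

# Register a hook on every leaf module, whatever the nesting is.
for name, module in model.named_modules():
    if len(list(module.children())) == 0:
        module.register_forward_hook(make_hook(name))

out = model(torch.randn(1, 3, 32, 32))

# For a purely sequential model, the dict's insertion order matches the
# execution order; parameters are reachable via model.named_parameters().
for name, act in activations.items():
    print(name, tuple(act.shape))
```

Note that for models with branches, the execution order alone still does not recover the edges of the graph.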



Actually, I don’t know how to access the weights in each layer, but here is my GitHub page that I think can be helpful for you. Accessing parameters and layers in a Sequential is easy. Moreover, using the following command you can easily get every layer (layers first to fourth).


I hope this page can be helpful

I need something exactly like this! I’d like to achieve it without writing something like a Python metaclass to wrap each of the types and track the propagation of results from one layer into another. That seems necessary because, while you can list children, nothing in that iterator guarantees what each child is fed as input. For example, the iterator order doesn’t guarantee that each child feeds the next its result, or really any structure regarding the graph at all. Listing the children is a lot like enumerating the nodes of a graph - there are no edges!

How to get “edges” of the graph?

Ok - I think I have resolved this, and it has been resolved before:

Take a look at this question on how to create a graph of variable assignment. The point is that because PyTorch uses dynamic computation graphs, the specification of how the graph is constructed, or computed with, exists only in Python code, so you need to recover the edges from the code. That’s what the trace is for - you can then combine it with the pickled information (the nodes) to reconstitute the graph.
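For a quick check without tracing, you can also walk backward from an output tensor through its grad_fn chain; each autograd node’s next_functions are its incoming edges, which is exactly the graph structure. A minimal sketch with a toy model (not AlexNet):

```python
import torch
import torch.nn as nn

# A toy model standing in for the real network (illustrative only).
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
logits = model(torch.randn(1, 4))

def walk(fn, depth=0):
    """Recursively print the backward-graph nodes rooted at `fn`."""
    if fn is None:
        return
    print("  " * depth + type(fn).__name__)
    for next_fn, _ in fn.next_functions:
        walk(next_fn, depth + 1)

# Each grad_fn node corresponds to an op in the forward pass, printed
# here in reverse order of execution (output back toward the input).
walk(logits.grad_fn)
```

The node names are backward-op names (e.g. a ReLU shows up as a Relu backward node), so mapping them back to named modules still takes some bookkeeping.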

I have faced this problem. Especially when the model is non-sequential, there was no way to directly access the incoming edges of a particular layer (node) in the graph.

However, since the PyTorch 1.0.1 upgrade, you can traverse the network graph using a JIT trace. Here is a code snippet for the AlexNet model:

import torch
from torchvision import models

def get_seq_exec_list(model):
    DUMMY_INPUT = torch.randn(1, 3, 224, 224)
    traced = torch.jit.trace(model, (DUMMY_INPUT,), check_trace=False)
    seq_exec_list = traced.code.split('\n')
    for idx, item in enumerate(seq_exec_list):
        print("[{}]: {}".format(idx, item))

x = models.alexnet()
get_seq_exec_list(x)

How to get layer parameters?
Simply parse the string and call eval() on the parameter you want to retrieve.

You can get and set layers simply by attribute access:

print(model)  # prints the network architecture

e.g. model.fc -> fully-connected last layer, model.bn1 -> 1st batch-norm layer, model.conv1 -> 1st conv layer, etc.
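To illustrate with a made-up model (the attribute names conv1, bn1, and fc here mimic the convention above, but which names exist depends entirely on how the network class was defined):

```python
import torch
from torch import nn

# Hypothetical model whose attribute names imitate the convention above.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 8, kernel_size=3)
        self.bn1 = nn.BatchNorm2d(8)
        self.fc = nn.Linear(8, 2)

    def forward(self, x):
        x = torch.relu(self.bn1(self.conv1(x)))
        x = x.mean(dim=(2, 3))  # global average pool
        return self.fc(x)

model = Net()
print(model.fc)              # get a layer via its attribute
model.fc = nn.Linear(8, 10)  # replace a layer the same way
out = model(torch.randn(2, 3, 8, 8))
print(tuple(out.shape))
```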