Accessing intermediate layers of a pretrained network in forward()?

Hi, I want to get outputs from multiple layers of a pretrained VGG-19 network. I have already done that with this approach, which I found on this board:

class AlexNetConv4(nn.Module):
    def __init__(self):
        super(AlexNetConv4, self).__init__()
        self.features = nn.Sequential(
            # stop at conv4
            *list(original_model.features.children())[:-3]
        )

    def forward(self, x):
        x = self.features(x)
        return x

Initializing a new net for every output I am interested in occupies a lot of space on my GPU, so I would rather follow this approach via the forward method:

def forward(self, x):
    out1 = F.relu(self.conv1(x))
    out2 = F.relu(self.conv2(out1))
    out3 = F.relu(self.conv3(out2))
    return out1, out2, out3

The problem is that I don't know how to get the names of the convolutions in a pretrained VGG net from the torchvision models.
I hope someone can help me out with that!


Here is an example of how to get the outputs of specified layers in VGG-16 (see the sketch below).

The 3rd, 8th, 15th, and 22nd layers are relu1_2, relu2_2, relu3_3, and relu4_3.
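Roughly, the extractor runs vgg16.features layer by layer and collects the outputs at those indices. A minimal untested sketch (the class name Vgg16Features is just a placeholder):

```Python
import torch.nn as nn
from torchvision.models import vgg16

class Vgg16Features(nn.Module):
    def __init__(self):
        super(Vgg16Features, self).__init__()
        # keep only the first 23 layers of vgg16.features; the rest is
        # never needed for the selected outputs
        features = list(vgg16(pretrained=True).features)[:23]
        self.features = nn.ModuleList(features).eval()

    def forward(self, x):
        results = []
        for ii, layer in enumerate(self.features):
            x = layer(x)
            if ii in {3, 8, 15, 22}:  # relu1_2, relu2_2, relu3_3, relu4_3
                results.append(x)
        return results
```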

BTW, you may use

```Python
your code
```

to format your code

it'll then render like this:

your code

Wow, thanks a lot for your help!

With your approach I need to do several iterations to get all my desired outputs. I would like to speed up my training with something like:

def forward(self, x):
    out1 = F.relu(self.conv1(x))
    out2 = F.relu(self.conv2(out1))
    out3 = F.relu(self.conv3(out2))
    return out1, out2, out3

I have searched a lot, but I can't find a way to get the names of the different layers of the pretrained PyTorch VGG-19 model.
Thanks for your help!


Could you make it clear which layers you want? You don't need to do an extra forward pass.
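If it helps to pick them, the layers of the torchvision VGG-19 can simply be printed together with their indices (a minimal sketch):

```Python
# Print the index and type of every layer in vgg19.features, so that
# individual convolutions/ReLUs can be referred to by index.
import torchvision.models as models

vgg19 = models.vgg19(pretrained=True)
for idx, layer in enumerate(vgg19.features):
    print(idx, layer)
```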

I am training a network for image synthesis based on this paper https://arxiv.org/abs/1707.09405 and get my loss from a VGG-19 net. The layers I am currently interested in are 3, 7, 8, 15, 22 and 32. The idea is to get a more complex loss function by comparing the detected features of a reference and a synthesized image.
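The loss I have in mind would look roughly like this (just a sketch; `extractor` stands for a frozen VGG feature extractor that returns a list of feature maps):

```Python
import torch
import torch.nn.functional as F

def perceptual_loss(extractor, synthesized, reference):
    # both images go through the same frozen VGG extractor; the per-layer
    # feature maps are then compared
    feats_syn = extractor(synthesized)
    with torch.no_grad():
        feats_ref = extractor(reference)
    return sum(F.mse_loss(s, r) for s, r in zip(feats_syn, feats_ref))
```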

Change vgg16 to vgg19,

change the selected layers to 3, 7, 8, 15, 22,

and change 23 to 33.

There is no extra computation.
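Put together, the adapted extractor could look like this (untested sketch; layer 32 is included in the selected set since it was also asked for above):

```Python
import torch.nn as nn
from torchvision.models import vgg19

class Vgg19Features(nn.Module):
    def __init__(self):
        super(Vgg19Features, self).__init__()
        # keep the first 33 layers so that layer 32 is still reachable
        features = list(vgg19(pretrained=True).features)[:33]
        self.features = nn.ModuleList(features).eval()

    def forward(self, x):
        results = []
        for ii, layer in enumerate(self.features):
            x = layer(x)
            if ii in {3, 7, 8, 15, 22, 32}:
                results.append(x)
        return results
```

All selected feature maps are collected in a single forward pass, which is why there is no extra computation.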


How can I access intermediate layers from resnet50?

>>> list(models.vgg16().features)
[Conv2d(3, 64, kernel_size=(3, 3), ... # lists all layers
>>> list(models.resnet50().features)
AttributeError: 'ResNet' object has no attribute 'features'

And that's true, it doesn't have a 'features' attribute, unlike VGG.

Thank you.
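ResNet keeps its blocks as separate attributes (conv1, bn1, layer1–layer4, ...) instead of a features container. Two common workarounds, shown as a rough sketch: rebuild a truncated nn.Sequential from the named children, or register a forward hook on the block whose output you need.

```Python
import torch
import torch.nn as nn
import torchvision.models as models

resnet = models.resnet50(pretrained=True).eval()

# 1) Truncated model built from the named children, e.g. up to layer3:
children = dict(resnet.named_children())
trunk = nn.Sequential(children['conv1'], children['bn1'], children['relu'],
                      children['maxpool'], children['layer1'],
                      children['layer2'], children['layer3'])

# 2) Or register a forward hook that stores the block's output:
activations = {}
def save_output(name):
    def hook(module, inputs, output):
        activations[name] = output
    return hook

resnet.layer3.register_forward_hook(save_output('layer3'))

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    feats = trunk(x)   # intermediate features via the truncated model
    _ = resnet(x)      # full forward pass; the hook fills `activations`
print(feats.shape, activations['layer3'].shape)  # both are layer3 outputs
```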

Sorry for playing the necromancer here, but I have the same issue.
I am still new to PyTorch, but if I understand it correctly, your proposed solution here:

computes exactly one forward pass, right?
That is, conv1_1 will be executed exactly once, and its result will be reused to compute conv1_2?
This should be fine as long as I don't want the pre-ReLU activations. For conv outputs before the ReLU I'd need to save x.clone(), I guess, because the ReLU is computed in-place?

EDIT:
Also, this only works for the convolutional features, right? If I wanted to extract e.g. the fc6 features of a VGG-19 network (that is, the first fc layer after the conv blocks), I'd have to take the submodules from net.classifier and build my own nn.Sequential with those?
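For what it's worth, a rough sketch of both ideas (the function name is made up; assumes a recent torchvision, which has vgg.avgpool, and a 224×224 input):

```Python
import torch
from torchvision.models import vgg19

vgg = vgg19(pretrained=True).eval()

def conv1_1_pre_relu_and_fc6(x):
    pre_relu = None
    for idx, layer in enumerate(vgg.features):
        x = layer(x)
        if idx == 0:              # conv1_1
            pre_relu = x.clone()  # clone before the in-place ReLU overwrites it
    x = vgg.avgpool(x)
    x = torch.flatten(x, 1)
    fc6 = vgg.classifier[0](x)    # first Linear layer of the classifier (fc6)
    return pre_relu, fc6

pre_relu, fc6 = conv1_1_pre_relu_and_fc6(torch.randn(1, 3, 224, 224))
```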

Hello. What I want to do is feed some input to AlexNet, get the output feature maps of a specific middle layer, save them, and feed them to another CNN as training data; for example, the second network is an autoencoder. Now my question is: how do I feed the middle-layer outputs to this new network as training data?
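One way to do it, sketched with made-up names: detach the feature maps from AlexNet and use them directly as the autoencoder's input and reconstruction target.

```Python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import alexnet

# AlexNet up to the second conv block; its output has 192 channels
extractor = alexnet(pretrained=True).features[:6].eval()

# stand-in for the second network (a small convolutional autoencoder)
autoencoder = nn.Sequential(
    nn.Conv2d(192, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 192, 3, padding=1),
)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

images = torch.randn(4, 3, 224, 224)  # stand-in for a batch from a DataLoader
with torch.no_grad():                 # no gradients back into AlexNet
    feats = extractor(images)         # these feature maps are the training data
recon = autoencoder(feats)
loss = F.mse_loss(recon, feats)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```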

Hi, you can try this PyTorch utility: torch-intermediate-layer-getter · PyPI