How to access an intermediate layer in a pretrained model built from Sequential layers?

I have a model (model_baseline) whose code is defined as:

        # flatten the bottleneck units of all blocks into one Sequential
        modules = []
        for block in blocks:
            for bottleneck in block:
                modules.append(
                    unit_module(bottleneck.in_channel,
                                bottleneck.depth,
                                bottleneck.stride))
        self.body = Sequential(*modules)

        self._initialize_weights()

    def forward(self, x):
        x = self.input_layer(x)
        x = self.body(x)
        x = self.output_layer(x)

        return x

Link: https://github.com/ZhaoJ9014/face.evoLVe.PyTorch/blob/d5e31893f7e30c0f82262e701463fd83d9725381/backbone/model_irse.py#L156

where blocks is defined as:

blocks = [
    get_block(in_channel=64, depth=64, num_units=3),
    get_block(in_channel=64, depth=128, num_units=13),
    get_block(in_channel=128, depth=256, num_units=30),
    get_block(in_channel=256, depth=512, num_units=3)
]

I want to access the output of block 0 in the body during training (get_block(in_channel=64, depth=64, num_units=3)). How can I do this in PyTorch? Note that the network is loaded from a pretrained model.

This is what I tried:

        # block 0 on its own (units 0-2), so its output can be read separately
        modules_0 = []
        for block in blocks[0:1]:
            for bottleneck in block:
                modules_0.append(
                    unit_module(bottleneck.in_channel,
                                bottleneck.depth,
                                bottleneck.stride))
        self.body_0 = Sequential(*modules_0)

        # the remaining blocks
        modules = []
        for block in blocks[1:]:
            for bottleneck in block:
                modules.append(
                    unit_module(bottleneck.in_channel,
                                bottleneck.depth,
                                bottleneck.stride))
        self.body = Sequential(*modules)


        self._initialize_weights()

    def forward(self, x):
        x = self.input_layer(x)
        x = self.body_0(x)
        x = self.body(x)
        x = self.output_layer(x)

        return x

However, this approach cannot reuse the weights from the pretrained model, because the checkpoint was saved for the model_baseline architecture: its state_dict keys follow the original submodule names (body.0, body.1, ...), which no longer match the renamed submodules (body_0.0, ..., and the re-indexed body).
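One workaround would be to remap the checkpoint keys by hand: body_0 takes over units 0-2 of the original body, and the remaining units shift down by three. A minimal sketch, assuming the checkpoint stores a flat state_dict (the filename is hypothetical):

    import torch

    state_dict = torch.load('backbone.pth')       # checkpoint filename is hypothetical
    remapped = {}
    for key, value in state_dict.items():
        if key.startswith('body.'):
            idx = int(key.split('.')[1])
            rest = key.split('.', 2)[2]
            if idx < 3:                           # block 0 has num_units=3
                remapped[f'body_0.{idx}.{rest}'] = value
            else:
                remapped[f'body.{idx - 3}.{rest}'] = value
        else:
            remapped[key] = value
    model.load_state_dict(remapped)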

You can use forward hooks to get the output of a specific module.
This post gives you an example. Let me know if you get stuck.
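Roughly like this (a minimal sketch; the model variable and the input size are assumptions, and body[2] is the last unit of block 0, since that block has num_units=3):

    import torch

    activation = {}

    def get_activation(name):
        def hook(module, input, output):
            activation[name] = output.detach()
        return hook

    # block 0 occupies body[0:3], so its output is the output of body[2]
    handle = model.body[2].register_forward_hook(get_activation('block0'))

    x = torch.randn(1, 3, 112, 112)  # input size assumed from the repo
    out = model(x)
    print(activation['block0'].shape)
    handle.remove()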


@ptrblck Thanks for your help. I tried your approach, but it does not work. This is my full code:
https://gist.github.com/John1231983/aeba806e92ed62052e842f4de74049b1#file-model-py-L177

You are registering the hook after the forward pass was already performed.
Could you use the same approach I’ve used in the linked post or explain what exactly is not working?

@ptrblck My aim is to get an attention map (CAM) at a selected layer during the forward pass. Is it possible to use your method in my case?

Yes, you should be able to get the intermediate activations, if you stick to my code example.
As explained before, you are registering the hook after the forward pass was already performed, so that you won’t be able to get this activation anymore.
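Concretely: register the hook once before the training loop; every forward pass then refreshes the stored activation (a sketch; train_loader and the channel-average "CAM" are placeholders):

    # register the hook once, before any forward pass
    handle = model.body[2].register_forward_hook(get_activation('block0'))

    for images, labels in train_loader:   # train_loader is an assumption
        output = model(images)            # the hook fires during this call
        fmap = activation['block0']       # block 0 feature map, refreshed each pass
        cam = fmap.mean(dim=1)            # toy channel average; a real CAM weights
                                          # the channels by the classifier weights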