How to (use a hook to) get the output of a conv layer in the middle of a Bottleneck in resnet101

I am using resnet101. resnet101 has 4 layers, and each layer contains a different number of Bottleneck blocks (3, 4, 23, and 3, respectively):

import torchvision.models as models

resnet101 = models.resnet101(pretrained=True)
# print(resnet101)

If we look at the first Bottleneck of the last layer (layer 4), we have:

import torch.nn as nn

modules = list(resnet101.children())[:-2]  # drop the trailing avgpool and fc
resnet101 = nn.Sequential(*modules)
print(resnet101[7][0])  # index 7 is layer4; [0] is its first Bottleneck

Bottleneck(
  (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
  (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
  (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
  (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU(inplace)
  (downsample): Sequential(
    (0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False)
    (1): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
)

I want to get the output of conv3 of this Bottleneck.

Can anyone please tell me how I can do that?

I also tried changing the Bottleneck forward to return two outputs, one being x and one the activation after conv3, but that gives me weird errors…

Here is the code of the original resnet101 that I'm using…

import torch.nn as nn
import math
import torch.utils.model_zoo as model_zoo
import torch

class Bottleneck(nn.Module):
    expansion = 4

    def __init__(self, inplanes, planes, stride=1, downsample=None, dilation=1):
        super(Bottleneck, self).__init__()
        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)

        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, padding=dilation, dilation=dilation, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)

        self.conv3 = nn.Conv2d(planes, planes * self.expansion, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(planes * self.expansion)

        self.relu = nn.ReLU(inplace=True)

        self.downsample = downsample
        self.stride = stride


    def forward(self, x):
        residual = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)

        if self.downsample is not None:
            residual = self.downsample(x)

        out += residual
        out = self.relu(out)

        return out


class ResNet(nn.Module):

    def __init__(self, block, layers):
        self.inplanes = 64
        super(ResNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)

        self.layer1 = self._make_layer(block, 64, layers[0])
        self.layer2 = self._make_layer(block, 128, layers[1], stride=1)  # we modify the stride of 2 here to be 1
        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
        self.layer4 = self._make_layer(block, 512, layers[3], stride=2, dilation=6)

        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)


    def _make_layer(self, block, planes, blocks, stride=1, dilation=1):
        downsample = None
        if stride != 1 or self.inplanes != planes * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(self.inplanes, planes * block.expansion, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(planes * block.expansion),
            )

        layers = []
        layers.append(block(self.inplanes, planes, stride, downsample, dilation=dilation))
        self.inplanes = planes * block.expansion
        for _ in range(1, blocks):
            layers.append(block(self.inplanes, planes, dilation=dilation))

        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)  # layer4 must consume layer3's output (1024 channels), not layer2's

        return x
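
(In hindsight, the weird errors probably come from nn.Sequential: it feeds each block's return value straight into the next block, so a Bottleneck that returns a tuple breaks the block after it. A sketch that avoids this by stashing the conv3 output on the block instead of returning it; the subclass name is my own:)

class BottleneckTap(Bottleneck):
    # identical to Bottleneck, but keeps the conv3 output around as an
    # attribute; returning (out, conv3_out) would break nn.Sequential
    def forward(self, x):
        residual = x

        out = self.relu(self.bn1(self.conv1(x)))
        out = self.relu(self.bn2(self.conv2(out)))

        out = self.conv3(out)
        self.conv3_output = out  # stash the intermediate activation

        out = self.bn3(out)

        if self.downsample is not None:
            residual = self.downsample(x)

        out += residual
        return self.relu(out)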
       

I think register_forward_hook is good in this case.

def hook(module, input, output):
    # stash the output on the module itself so it can be read out later
    setattr(module, "_value_hook", output)

for n, m in resnet101.named_modules():
    if n == "foo":  # "foo" stands for the dotted name of the target module
        m.register_forward_hook(hook)

resnet101(input)

for n, m in resnet101.named_modules():
    if n == "foo":
        in_output = m._value_hook
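
Concretely, on the unmodified torchvision model the dotted name of that conv is layer4.0.conv3 (named_modules() lists all available names), so something like this sketch should work; the activations dict is my own convention:

import torch
import torchvision.models as models

resnet101 = models.resnet101(pretrained=True).eval()

activations = {}

def hook(module, input, output):
    activations["conv3"] = output  # the tensor conv3 just produced

# attach the hook directly to conv3 of the first Bottleneck of layer4
resnet101.layer4[0].conv3.register_forward_hook(hook)

with torch.no_grad():
    resnet101(torch.randn(1, 3, 224, 224))

print(activations["conv3"].shape)  # torch.Size([1, 2048, 7, 7])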

I haven't been able to understand hooks yet… the lack of good explanations/examples is responsible for that.

BTW, the link that you provided goes to “Set up a Docs Mirror in China”.

When I do what you mentioned, I get this error for the line for n, m in resnet101.named_module():

AttributeError: 'ResNet' object has no attribute 'named_module'

Try .named_modules() instead. It's probably a typo.

You are right, it was a typo!
This hook thing works, which is really interesting, but it is not completely clear to me yet how it works…
What does each of these functions do here? I mean, I read the register_forward_hook documentation and understood it, but what does this do:

def hook(module, input, output):
    setattr(module, "_value_hook", output)

Is _value_hook one of the attributes of the hook function?

Another question:
now that I have the in_output result, I do some further computations on it and use it in computing my loss. Will backpropagation still work as before, or should I do something extra?

@isalirezag @ptrblck I fixed the typo and the wrong link.

If I understand correctly, register_forward_hook registers a function (hook, in this case) that is called at the end of the forward method of the specified module, with that module's input and output.

_value_hook can be any name you like, as long as it doesn't clash with an attribute the module already uses.
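
A minimal, self-contained toy example of that mechanism (my own sketch, not from the docs):

import torch
import torch.nn as nn

lin = nn.Linear(4, 2)

def hook(module, input, output):
    # called every time lin's forward finishes; output is the tensor it returned
    setattr(module, "_value_hook", output)

lin.register_forward_hook(hook)

y = lin(torch.randn(3, 4))
print(torch.equal(lin._value_hook, y))  # True: the hook saw the real output

# The captured tensor is still part of the autograd graph, so using it
# in a loss and calling backward() works without anything extra.
print(lin._value_hook.grad_fn is not None)  # True

That last line also answers the backpropagation question above: the hook hands you the actual output tensor, not a copy detached from the graph.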

@moskomule @ptrblck
Thanks for your answer.

The bummer is that this hook approach is quite slow: at least 2-3 times slower than just changing the main function.
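
For reference, here is a sketch of what "just changing the main function" could look like on the stock torchvision model (the wrapper class name is my own):

import torch
import torch.nn as nn
import torchvision.models as models

class ResNet101Tap(nn.Module):
    """resnet101 whose forward also returns the conv3 output of the
    first Bottleneck of layer4, with no hooks involved."""

    def __init__(self):
        super().__init__()
        self.net = models.resnet101(pretrained=True)

    def forward(self, x):
        n = self.net
        x = n.maxpool(n.relu(n.bn1(n.conv1(x))))
        x = n.layer3(n.layer2(n.layer1(x)))

        # unroll layer4's first block so conv3 can be tapped directly
        blk = n.layer4[0]
        out = blk.relu(blk.bn1(blk.conv1(x)))
        out = blk.relu(blk.bn2(blk.conv2(out)))
        conv3_out = blk.conv3(out)
        out = blk.bn3(conv3_out)
        x = blk.relu(out + blk.downsample(x))

        for rest in list(n.layer4)[1:]:  # remaining layer4 blocks
            x = rest(x)
        return x, conv3_out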

Hello,
I am stuck on a similar problem. Can I use ResNet101 as an encoder only? I would like to create my own decoder for different applications. Thanks!
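
A sketch of that, reusing the [:-2] trick from earlier in the thread (the decoder here is just a toy placeholder):

import torch
import torch.nn as nn
import torchvision.models as models

# drop avgpool and fc to keep only the convolutional encoder;
# it maps (N, 3, H, W) to a (N, 2048, H/32, W/32) feature map
encoder = nn.Sequential(*list(models.resnet101(pretrained=True).children())[:-2])

features = encoder(torch.randn(1, 3, 224, 224))
print(features.shape)  # torch.Size([1, 2048, 7, 7])

# a custom decoder can then consume these features, e.g.:
decoder = nn.Sequential(
    nn.ConvTranspose2d(2048, 256, kernel_size=2, stride=2),
    nn.ReLU(inplace=True),
    nn.Conv2d(256, 1, kernel_size=1),
)
print(decoder(features).shape)  # torch.Size([1, 1, 14, 14])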