How to add an extra layer in the middle of the resnet18 architecture?

Hello guys,
I have a simple question. Here is the architecture of resnet18:

ResNet (
    (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
    (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)
    (relu): ReLU (inplace)
    (maxpool): MaxPool2d (size=(3, 3), stride=(2, 2), padding=(1, 1), dilation=(1, 1))
    (layer1): Sequential (
        (0): BasicBlock (
              (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
              (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)
              (relu): ReLU (inplace)
              (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
              (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)
        )
        (1): BasicBlock (
              (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
              (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)
              (relu): ReLU (inplace)
              (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
              (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True)
        )
     )
     (layer2): Sequential (
        (0): BasicBlock (
             (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
             (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True)
             (relu): ReLU (inplace)
             (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
             (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True)
             (downsample): Sequential (
                  (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
                  (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True)
             )
        )
        (1): BasicBlock (
             (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
             (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True)
             (relu): ReLU (inplace)
             (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
             (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True)
       )
    )
    (layer3): Sequential (
       (0): BasicBlock (
             (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
             (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)
             (relu): ReLU (inplace)
             (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
             (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)
             (downsample): Sequential (
                   (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
                   (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)
             )
       )
       (1): BasicBlock (
            (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)
            (relu): ReLU (inplace)
            (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
            (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True)
       )
   )
   (layer4): Sequential (
       (0): BasicBlock (
             (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
             (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
             (relu): ReLU (inplace)
             (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
             (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
             (downsample): Sequential (
                   (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
                   (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
             )
        )
        (1): BasicBlock (
             (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
             (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
             (relu): ReLU (inplace)
             (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
             (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
        )
    )
    (avgpool): AvgPool2d ()
    (fc): Linear (512 -> 1000)
)

I would like to make a branch in layer1, BasicBlock 1, after conv2. That is, I want to take the output of that layer and feed it into a separate branch. One point that is very important for me is to keep using the pretrained ResNet weights. Could you please help me with how to do that?


You can look at the ResNet models under the torchvision package.


Thanks for your response, but I don't quite get what you mean. Could you please write a short snippet here showing how to do that?

Hi,
The ResNet model definitions live in torchvision.models.resnet.

Look at the _make_layer function and at how the residual blocks are coded.

You can then adapt these however you want.


Thanks for your response! Could you please write a concrete example? I mean, please add a conv after bn2 in BasicBlock 0 of layer2. I would like to make a branch around that point.

I mean please add a conv after bn2 in basic block 0 in layer2.

Examples are given as pointers to get you going. You can't expect someone to write the exact example you want (or rather, to do the work for you).
