Insert New Layer in the Middle of a Pre-trained Model

Help!!! Does anyone know how to insert a new layer in the middle of a pre-trained model? E.g. insert a new conv in the middle of ResNet’s bottleneck.


There’s no easy way to insert a new layer in the middle of an existing model as far as I’m aware. A reliable solution is to build the structure you want in a new class and then copy the corresponding weights over from the pretrained model.


After loading the model, we can directly assign model.conv_x = nn.Sequential(new_layer, model.conv_x); this way, we can still use the pretrained model.conv_x.


Can you help a newbie like me? I import pretrained resnet34 as:

resnet = models.resnet34(pretrained=True)

Now I want to insert a Conv2d layer with a 1x1 kernel before the fc layer to increase the channel size from 512 to 6000, and then add an fc layer of 6000 x 6000.

I am so new to pytorch that I need some hand-holding. Can you write the lines of code needed? I am still at monkey-see monkey-learn stage. Thanks in anticipation


I think the easiest approach would be to derive from ResNet and add your layers.
This should do what you need:

import torch
import torch.nn as nn
from torchvision import models
from torchvision.models.resnet import BasicBlock

class MyResnet2(models.ResNet):
    def __init__(self, block, layers, num_classes=1000):
        super(MyResnet2, self).__init__(block, layers, num_classes)
        # 1x1 conv to grow the pooled features from 512 to 6000 channels
        self.conv_feat = nn.Conv2d(in_channels=512, out_channels=6000, kernel_size=1)
        self.fc = nn.Linear(in_features=6000, out_features=6000)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        x = self.avgpool(x)
        x = self.conv_feat(x)
        x = x.view(x.size(0), -1)
        x = self.fc(x)

        return x

model = MyResnet2(BasicBlock, [3, 4, 6, 3], 1000)
x = torch.randn(1, 3, 224, 224)
output = model(x)

Note that the shape of x after x = self.avgpool(x) is already [batch, 512, 1, 1] for an input of shape [batch, 3, 224, 224].
You could therefore flatten x and just use two Linear layers, since a Linear layer is equivalent to a Conv2d with kernel_size=1 on a 1x1 spatial map.
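A quick numerical check of that equivalence, with the sizes from the example above (the weight sharing is only there so both layers compute the same function):

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(512, 6000, kernel_size=1)
linear = nn.Linear(512, 6000)

# Copy the conv weights into the linear layer so both compute the same map.
with torch.no_grad():
    linear.weight.copy_(conv.weight.view(6000, 512))
    linear.bias.copy_(conv.bias)

x = torch.randn(4, 512, 1, 1)  # shape after avgpool
out_conv = conv(x).view(4, -1)
out_linear = linear(x.view(4, -1))
print(torch.allclose(out_conv, out_linear, atol=1e-5))  # True
```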


Thank you, sir!
Now if I re-read some of the tutorials, they will register in my head.


Sorry for being so late to reply. My way would be to replace ‘Resnet.fc = nn.Linear(512, num_classes)’ with ‘Resnet.fc = nn.Sequential(nn.Conv2d(512, 6000, kernel_size=1), nn.Linear(6000, 6000))’


Using this approach you would have to define a Flatten layer, since the 4-dimensional conv output would otherwise crash in the Sequential model’s Linear layer.

import torch
import torch.nn as nn

class Flatten(nn.Module):
    def __init__(self):
        super(Flatten, self).__init__()

    def forward(self, x):
        x = x.view(x.size(0), -1)
        return x

seq = nn.Sequential(nn.Conv2d(512, 6000, 1),
                    Flatten(),
                    nn.Linear(6000, 6000))

x = torch.randn(1, 512, 1, 1)
output = seq(x)

@ptrblck and @wangchust,

Thank you both.

I find this very useful for creating a custom model which inserts a new layer in the middle of ResNet, but how do we get the pretrained weights? This is giving me a new neural net which doesn’t have the pretrained weights.
Thank you.

I want to add attention layers to the pretrained ResNet model. How can I do so after every ResNet block in the model?

This is what I did, I think it works.

vgg11.features[0] = nn.Sequential(inserted_layer, vgg11.features[0])

I checked both parameters, and I think the pretrained parameters are preserved.

I first divided the model into two parts; the 1st half contains the layers before the layer you want to add, and the 2nd half contains the layers after it. So something like this:

encoder = nn.Sequential(*list(m.children())[:8])
decoder  = nn.Sequential(*list(m.children())[8:])

Then I added the layers I need to the 1st list, got a list of the layers of the 2nd half, and appended each of them to the 1st list. Like this:

tmp_1 = list(encoder.children())
tmp_2 = list(decoder.children())
for i in tmp_2:
    tmp_1.append(i)

And then model = nn.Sequential(*tmp_1)