Insert New Layer in the Middle of a Pre-trained Model


(Wangchust) #1

Help!!! Does anyone know how to insert a new layer in the middle of a pre-trained model? E.g. insert a new conv in the middle of ResNet's bottleneck.


(Kai Arulkumaran) #2

There’s no easy way to insert a new layer in the middle of an existing model as far as I’m aware. A definite solution is to build the structure that you want in a new class and then copy the corresponding weights over from the pretrained model.
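Something along these lines might work (a rough sketch; new_model below is just a stand-in for whatever modified class you define — only the parameters whose names and shapes still match get copied):

import torch
from torchvision import models

pretrained = models.resnet34(pretrained=True)
new_model = models.resnet34()  # stand-in for your modified architecture

# Copy every pretrained parameter whose name and shape still match;
# any newly inserted layers keep their fresh initialization.
own_state = new_model.state_dict()
for name, param in pretrained.state_dict().items():
    if name in own_state and own_state[name].shape == param.shape:
        own_state[name].copy_(param)
new_model.load_state_dict(own_state)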


(Wangchust) #3

Figured it out!!!
After loading the model, we can directly assign model.conv_x = nn.Sequential(new_layer, model.conv_x); this way, we can still use the pretrained model.conv_x.
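For example (a rough sketch on torchvision's resnet34; layer1 is just one choice of attribute, and the new conv keeps the 64-channel shape that layer1 expects):

import torch.nn as nn
from torchvision import models

model = models.resnet34(pretrained=True)

# Prepend a fresh conv to the pretrained layer1 block. layer1 expects
# 64 input channels, so the new conv maps 64 -> 64 and keeps the
# spatial size with padding=1.
new_layer = nn.Conv2d(64, 64, kernel_size=3, padding=1)
model.layer1 = nn.Sequential(new_layer, model.layer1)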


(Shirish (Sam) Ranade) #4

@wangchust
Can you help a newbie like me? I import the pretrained resnet34 as:

resnet = models.resnet34(pretrained=True)

Now I want to insert a Conv2d layer with a 1x1 kernel before the fc to increase the channel size from 512 to 6000, and then add a 6000 x 6000 fc.

I am so new to PyTorch that I need some hand-holding. Can you write the lines of code needed? I am still at the monkey-see, monkey-learn stage. Thanks in anticipation!


(ptrblck) #5

I think the easiest approach would be to derive from ResNet and add your layers.
This should do what you need:


import torch
import torch.nn as nn
from torchvision import models
from torchvision.models.resnet import BasicBlock


class MyResnet2(models.ResNet):
    def __init__(self, block, layers, num_classes=1000):
        super(MyResnet2, self).__init__(block, layers, num_classes)
        # 1x1 conv to widen the pooled features from 512 to 6000 channels
        self.conv_feat = nn.Conv2d(in_channels=512,
                                   out_channels=6000,
                                   kernel_size=1)
        # replaces the parent's 512 -> num_classes fc with a 6000 -> 6000 one
        self.fc = nn.Linear(in_features=6000,
                            out_features=6000)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        x = self.avgpool(x)        # [batch, 512, 1, 1]
        x = self.conv_feat(x)      # [batch, 6000, 1, 1]
        x = x.view(x.size(0), -1)  # [batch, 6000]
        x = self.fc(x)

        return x


# [3, 4, 6, 3] are the BasicBlock counts used by resnet34
model = MyResnet2(BasicBlock, [3, 4, 6, 3], 1000)
x = torch.randn(1, 3, 224, 224)
output = model(x)

Note that the shape of x in x = self.avgpool(x) is already [batch, 512, 1, 1] for x.shape = [batch, 3, 224, 224].
You could therefore flatten x and just use two Linear layers, since that would be the same as a Conv2d with kernel_size=1.
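For example (a small sketch of that Linear-only variant, assuming the head receives the [batch, 512, 1, 1] avgpool output):

import torch
import torch.nn as nn

# On a [batch, 512, 1, 1] activation, flattening and applying a Linear
# layer computes the same mapping as a 1x1 Conv2d to 6000 channels.
head = nn.Sequential(nn.Linear(512, 6000),
                     nn.Linear(6000, 6000))

x = torch.randn(1, 512, 1, 1)
out = head(x.view(x.size(0), -1))  # -> [1, 6000]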


(Shirish (Sam) Ranade) #6

@ptrblck
Thank you, sir!
Now if I re-read some of the tutorials, they will register in my head.


(Wangchust) #7

Sorry for being so late to reply. My way would be to replace ‘Resnet.fc = nn.Linear(512, num_classes)’ with ‘Resnet.fc = nn.Sequential(nn.Conv2d(512, 6000, 1), nn.Linear(6000, 6000))’.


(ptrblck) #8

Using this approach you would have to define a Flatten layer, since the Conv2d output is 4-dimensional and the model would otherwise crash inside the nn.Sequential.

import torch
import torch.nn as nn


class Flatten(nn.Module):
    # flattens [batch, C, H, W] to [batch, C*H*W] so nn.Linear can follow
    def __init__(self):
        super(Flatten, self).__init__()

    def forward(self, x):
        x = x.view(x.size(0), -1)
        return x


seq = nn.Sequential(nn.Conv2d(512, 6000, 1),
                    Flatten(),
                    nn.Linear(6000, 6000))

x = torch.randn(1, 512, 1, 1)
out = seq(x)  # -> [1, 6000]

(Shirish (Sam) Ranade) #9

@ptrblck and @wangchust,

Thank you both.