Given VGG16, how can I remove the pool5 layer and all the classifier layers? And how can I add new layers while keeping the pretrained weights?

pretrained_model = torchvision.models.vgg16(pretrained=True)

  1. Given VGG16, how do I remove the pool5 layer and all the classifier layers (keeping only the conv1_1 to relu5_3 layers)?
  2. How do I add new layers with randomly initialized parameters, while conv1_1 ~ relu5_3 keep their pretrained weights?

You can do something like this:

import torch
import torch.nn as nn
import torchvision.models as models


class EncoderCNN(nn.Module):

    def __init__(self):
        super(EncoderCNN, self).__init__()
        self.vgg = models.vgg16()
        self.vgg.load_state_dict(torch.load(vgg_checkpoint))  # path to your pretrained weights
        # keep classifier[0] .. classifier[5], dropping the last fc layer
        self.vgg.classifier = nn.Sequential(
            *(self.vgg.classifier[i] for i in range(6)))

    def forward(self, images):
        return self.vgg(images)

The VGG net can be viewed as the combination of two sub-nets: a feature-extracting net and a classifying net, each of which is an nn.Sequential module. I just removed the last fc layer in the classifying net by constructing a new nn.Sequential module with the pretrained parameters.

For your requirement, I guess you can do it like this:

    def __init__(self):
        super(EncoderCNN, self).__init__()
        self.vgg = models.vgg16()
        self.vgg.load_state_dict(torch.load(vgg_checkpoint))
        # keep features[0] .. features[29], i.e. conv1_1 up to relu5_3 (drops pool5)
        self.vgg.features = nn.Sequential(
            *(self.vgg.features[i] for i in range(30)))

    def forward(self, images):
        return self.vgg.features(images)

Sorry that this method looks so ugly. The nn.Sequential object does not support slicing, so I had to construct a new Sequential from the individual layers.

I guess the following code does the same thing, right?
pretrained_model = torchvision.models.vgg16(pretrained=True)
modified_pretrained = nn.Sequential(*list(pretrained_model.features.children())[:-1]) # to relu5_3


Yes! Your implementation looks much more elegant, thank you!

For some reason, this method doesn’t seem to work for me. I receive the following error:

RuntimeError: size mismatch, m1: [3584 x 7], m2: [25088 x 4096] at /pytorch/torch/lib/TH/generic/THTensorMath.c:1293

Did you face a similar issue? I don’t know why this should happen because after all, it’s using all the layers in a sequential fashion.
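One likely cause of that error (an assumption, since the failing code is not shown): the classifier's first Linear layer expects a flattened 25088-dimensional vector (512 × 7 × 7), but it is being fed the unflattened 512 × 7 × 7 feature map, so the matmul sees a [512·7 × 7] = [3584 × 7] matrix. A quick arithmetic check:

```python
# VGG16's feature extractor ends in a 512-channel 7x7 map (for a 224x224 input)
channels, height, width = 512, 7, 7

# what the classifier's first Linear layer expects after flattening
expected_in = channels * height * width
print(expected_in)  # 25088 -- matches m2's first dimension in the error

# what the matmul sees if the 4D map is passed without flattening
unflattened_rows = channels * height
print(unflattened_rows)  # 3584 -- matches m1 in the error message
```

So the fix would be to flatten between the features and the classifier, e.g. `x = x.view(x.size(0), -1)` in `forward`.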

Hi, how can I use this method for a pretrained resnet18? Currently the same method gives an error: ‘ResNet’ object has no attribute ‘features’.


Yeah, I got the same result…

This is because there is no module named features in the pretrained ResNet model. features is one of the modules of VGG (the initial example of this thread). To see the module names, simply print your model.

my_model = models.resnet18(pretrained=False, progress=True)
print(my_model)