Adding layers between layers of a pretrained model

import torchvision.models as models
base_model = models.densenet161(pretrained=True).features

How can I insert additional layers into the pre-trained model above? In densenet, the input goes through the conv0 layer, then denseblock1, …, denseblock4, before outputting 2208-channel feature maps. I would like to insert convolution layers right after the conv0 and denseblock1 layers. Any idea how I can do it?
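One generic way to do this (a minimal sketch using a toy two-layer `Sequential` as a stand-in for the real densenet161 features, with a hypothetical `custom1` layer) is to rebuild the container from an `OrderedDict` with the new layer spliced in after the target name:

```python
import torch.nn as nn
from collections import OrderedDict

# Toy stand-in for a pretrained feature extractor (not the real densenet161).
features = nn.Sequential(OrderedDict([
    ("conv0", nn.Conv2d(3, 8, 3)),
    ("block1", nn.Conv2d(8, 16, 3)),
]))

# Rebuild the Sequential, splicing a new (hypothetical) layer in after "conv0".
items = []
for name, module in features.named_children():
    items.append((name, module))
    if name == "conv0":
        items.append(("custom1", nn.Conv2d(8, 8, 1)))
features = nn.Sequential(OrderedDict(items))

print([name for name, _ in features.named_children()])
# ['conv0', 'custom1', 'block1']
```

The same loop works on `models.densenet161(pretrained=True).features`, since that attribute is itself an `nn.Sequential` of named submodules.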

You could try to manipulate the model initialization here and add your custom modules. However, this would prevent loading the pretrained state_dict directly, so you could either use the strict=False approach (I wouldn't recommend it without proper verification, as it can easily break if you are not careful), manipulate the state_dict keys to match the new keys in your model, or load the pretrained parameters manually.
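To illustrate the strict=False idea with a minimal sketch (toy models with hypothetical layer names, not the real densenet161 keys): because the inserted layer gets new, distinct parameter names, the pretrained keys still match, and `load_state_dict(..., strict=False)` simply reports the new layer's keys as missing while copying everything else.

```python
import torch
import torch.nn as nn
from collections import OrderedDict

# Toy stand-in for a "pretrained" model (hypothetical architecture).
pretrained = nn.Sequential(OrderedDict([
    ("conv0", nn.Conv2d(3, 8, 3, padding=1)),
    ("conv1", nn.Conv2d(8, 16, 3, padding=1)),
]))
state = pretrained.state_dict()

# Same architecture with an extra, untrained layer inserted after conv0.
modified = nn.Sequential(OrderedDict([
    ("conv0", nn.Conv2d(3, 8, 3, padding=1)),
    ("custom", nn.Conv2d(8, 8, 1)),          # new layer, stays randomly initialized
    ("conv1", nn.Conv2d(8, 16, 3, padding=1)),
]))

# strict=False skips only the keys belonging to the new "custom" layer.
missing, unexpected = modified.load_state_dict(state, strict=False)
print(missing)      # the new layer's parameter keys, e.g. custom.weight / custom.bias
print(unexpected)   # []

# Verify the pretrained parameters were actually copied.
assert torch.equal(modified.conv0.weight, pretrained.conv0.weight)
```

This is also why the verification step matters: always inspect the returned `missing`/`unexpected` key lists to confirm only the layers you added are affected.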


Thanks for the answer! I am wondering if I could add the layers after loading the pre-trained model. I tried adding my custom layers inside the pre-trained densenet as in the code below. Does it make sense, or must I manipulate the model initialization?

import torch.nn as nn
import torchvision.models as models

class encoder(nn.Module):
    def __init__(self, params):
        super(encoder, self).__init__()
        self.params = params
        self.base_model = models.densenet161(pretrained=True).features

        print("---------------- adding custom layers ------------------------")
        # Note: add_module() alone would only register the custom layers as
        # children; the parents' forward() would never call them. Wrapping each
        # target module in an nn.Sequential makes the custom layer actually run.
        self.base_model.conv0 = nn.Sequential(
            self.base_model.conv0,
            CUSTOM_LAYER(96, 96, para=(1, 2, 3, 4)))
        self.base_model.denseblock1.denselayer6 = nn.Sequential(
            self.base_model.denseblock1.denselayer6,
            CUSTOM_LAYER(384, 384, para=(1, 2, 3, 4)))
        self.base_model.denseblock2.denselayer12 = nn.Sequential(
            self.base_model.denseblock2.denselayer12,
            CUSTOM_LAYER(768, 768, para=(1, 2, 3, 4)))

    def forward(self, x):
        feature = x
        # run the (modified) feature extractor module by module
        for module in self.base_model.children():
            feature = module(feature)
        return feature
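A quick way to sanity-check that an inserted layer really runs in the forward pass (a sketch with toy stand-ins for the pretrained module and the custom layer, using a forward hook as a call counter):

```python
import torch
import torch.nn as nn

# Toy stand-in for a pretrained feature extractor (hypothetical).
base = nn.Sequential()
base.add_module("conv0", nn.Conv2d(3, 8, 3, padding=1))

# Wrap conv0 so the extra 1x1 conv is guaranteed to execute: calling
# base.conv0(x) now runs both modules in order.
base.conv0 = nn.Sequential(base.conv0, nn.Conv2d(8, 8, 1))

# Count how many times the inserted layer actually runs.
calls = []
base.conv0[1].register_forward_hook(lambda m, i, o: calls.append(1))

out = base(torch.randn(1, 3, 16, 16))
print(out.shape, len(calls))  # the hook fires once, so the layer ran
```

If the hook never fires, the layer was registered but is not part of the computation graph, which is exactly the pitfall with calling add_module() on a module whose forward() does not iterate over its children.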

Yes, your approach also looks valid and might even be the easiest way. 🙂