Is it possible to load weights from a pre-trained model layer by layer?

I trained a ResNet-50 + MLP (two nn.Linear layers) model. Next, I want to use the same model, except with a Dropout layer between the two nn.Linear layers in the MLP, while still reusing the weights from the previous model.

I am getting an error because the architecture has changed and ‘load_state_dict’ can no longer match the weights.

Is it possible to load the weights in this manner?

I attempted it like this:

for n, p in model.named_parameters():
    try:
        p = model_state_dict[n]
    except KeyError:
        # fall back to the old key from the checkpoint
        p = model_state_dict['projector.2.weight']

The key in the saved state_dict is ‘projector.2.weight’, while it becomes ‘projector.3.weight’ in the new model with Dropout.

Also, the weights of the other layers are not loaded correctly with this piece of code. Any suggestions?

You could either load the state_dict into the model before applying any manipulations, change the state_dict keys to match your new module names, or load the parameters layer-wise, which would most likely also need a mapping between the currently modified model and the pretrained state_dict. Sketches of the last two approaches follow below.
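For the key-remapping approach, something like this could work. It's a minimal sketch using a toy head standing in for the projector (the layer sizes are made up); in your model the keys would be ‘projector.2.weight’ etc. instead of ‘2.weight’:

import torch
import torch.nn as nn

# Toy stand-ins for the old and the new MLP head; sizes are placeholders.
old_head = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 4))
new_head = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Dropout(0.5), nn.Linear(8, 4))

# Dropout has no parameters, so shifting the second nn.Linear from index 2
# to index 3 is the only change the checkpoint needs.
remapped = {}
for key, value in old_head.state_dict().items():
    if key.startswith("2."):
        key = "3." + key[len("2."):]
    remapped[key] = value

new_head.load_state_dict(remapped)

For the layer-wise variant: assigning to the loop variable in your snippet (p = ...) only rebinds a local name and never touches the model, which is also why the other layers were not updated. You would copy into the parameters in-place instead. Here, rename is a hypothetical mapping from new parameter names to checkpoint keys, and model / model_state_dict are the objects from your code:

# Layer-wise copy into the existing parameters.
rename = {
    'projector.3.weight': 'projector.2.weight',
    'projector.3.bias': 'projector.2.bias',
}

with torch.no_grad():
    for name, param in model.named_parameters():
        key = rename.get(name, name)  # fall back to the unchanged name
        param.copy_(model_state_dict[key])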

I don’t know how you are manipulating the model, but I think the first approach might be the easiest one.
E.g., you could write a custom nn.Module, use the pretrained model inside it, add the new layer(s), and change the forward method as needed.
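Here is a minimal sketch of that approach, assuming a ResNet-50 backbone with a two-nn.Linear ‘projector’ head as described in the question; the feature sizes, dropout probability, and checkpoint path are placeholders:

import torch
import torch.nn as nn
from torchvision import models

class OriginalModel(nn.Module):
    # same architecture that was trained, so the checkpoint loads cleanly
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet50()
        self.backbone.fc = nn.Identity()
        self.projector = nn.Sequential(
            nn.Linear(2048, 512), nn.ReLU(), nn.Linear(512, 128))

    def forward(self, x):
        return self.projector(self.backbone(x))

class ModelWithDropout(nn.Module):
    # reuses the pretrained submodules and rebuilds the head with a Dropout
    def __init__(self, pretrained):
        super().__init__()
        self.backbone = pretrained.backbone
        head = pretrained.projector
        self.projector = nn.Sequential(head[0], head[1], nn.Dropout(0.5), head[2])

    def forward(self, x):
        return self.projector(self.backbone(x))

pretrained = OriginalModel()
pretrained.load_state_dict(torch.load('checkpoint.pth'))  # placeholder path; load before modifying
model = ModelWithDropout(pretrained)

Since the pretrained nn.Linear modules themselves are reused inside the new head, no key remapping is needed with this approach.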