New layer in an existing pretrained model

Hello,

I would like to add extra layers to an existing pretrained model, such as AlexNet.
My custom layer only changes the activation values and does not affect any dimensions.
The layer includes a 2D convolution, but with constant weights (I set requires_grad=False).
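
For context, here is a minimal sketch of the kind of layer I mean (the class name and the depthwise design are just an illustration):

```python
import torch
import torch.nn as nn

class ConstantConv(nn.Module):
    """Shape-preserving 2D convolution with fixed, non-trainable weights."""
    def __init__(self, channels, kernel):
        # kernel: a (k, k) tensor applied depthwise to every channel
        super().__init__()
        k = kernel.shape[-1]
        self.conv = nn.Conv2d(channels, channels, k, padding=k // 2,
                              groups=channels, bias=False)
        with torch.no_grad():
            self.conv.weight.copy_(kernel)      # broadcast to (channels, 1, k, k)
        self.conv.weight.requires_grad_(False)  # constant weights, never updated

    def forward(self, x):
        return self.conv(x)
```

Note that even with requires_grad=False, the conv weight is still an nn.Parameter, which is why it shows up in state_dict().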

The problem is that I'm having a hard time loading the pretrained values.
Looking at model.state_dict(), I can see that my layer adds entries to the state_dict, in contrast to pooling or ReLU layers, which don't appear in it.

How do I make my layer leave the original state_dict unchanged, the way pooling or activation functions do?

Thanks.

I don't understand the question.
Of course your new layer changes the state_dict; it holds a convolution weight, so it has to appear there.

If the problem is that you're having trouble loading the pretrained model, try passing the argument strict=False when loading the weights. This lets you load a state_dict without requiring an exact key match. Just give your new layers unique names so you don't get errors while loading.
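
Something like this (my_model here stands for your modified AlexNet; in recent PyTorch versions load_state_dict also returns the lists of mismatched keys):

```python
import torchvision.models as models

# Load pretrained weights into the modified model, ignoring keys
# that don't line up (e.g. your new constant-weight layer)
pretrained = models.alexnet(pretrained=True).state_dict()
result = my_model.load_state_dict(pretrained, strict=False)
print(result.missing_keys)     # params of my_model with no pretrained value
print(result.unexpected_keys)  # pretrained keys with no match in my_model
```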

Unfortunately, because AlexNet is implemented with nn.Sequential, inserting a layer shifts the numeric names of every layer after it, so strict=False doesn't work.
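
A toy example of the name clash (not the real AlexNet definition, with nn.Identity standing in for my custom layer):

```python
import torch.nn as nn

original = nn.Sequential(
    nn.Conv2d(3, 64, 3), nn.ReLU(), nn.Conv2d(64, 64, 3))
print(list(original.state_dict().keys()))
# ['0.weight', '0.bias', '2.weight', '2.bias']

# Inserting a new layer shifts every later index by one
modified = nn.Sequential(
    nn.Conv2d(3, 64, 3), nn.Identity(), nn.ReLU(), nn.Conv2d(64, 64, 3))
print(list(modified.state_dict().keys()))
# ['0.weight', '0.bias', '3.weight', '3.bias']
```

With strict=False, '2.weight' is reported as unexpected and '3.weight' as missing, so the later layers silently keep their random initialization instead of the pretrained weights.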

I can manually remove and rename keys in the state_dict; I wonder if there's a better way.
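
A sketch of that manual renaming, using the same toy models as above (this assumes only the numeric prefixes changed and all shapes still line up):

```python
from collections import OrderedDict
import torch.nn as nn

original = nn.Sequential(
    nn.Conv2d(3, 64, 3), nn.ReLU(), nn.Conv2d(64, 64, 3))
modified = nn.Sequential(
    nn.Conv2d(3, 64, 3), nn.Identity(), nn.ReLU(), nn.Conv2d(64, 64, 3))

pretrained = original.state_dict()
# Skip keys belonging to the inserted layer (index 1 here); both models
# enumerate the remaining parameters in the same module order, so zip
# pairs each old key's value with the corresponding new key
target_keys = [k for k in modified.state_dict() if not k.startswith('1.')]
remapped = OrderedDict(zip(target_keys, pretrained.values()))
modified.load_state_dict(remapped, strict=False)
```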