Add new layers to the pretrained model

I’m using the pre-trained VGG16 model as my base model. However, I need to add a new layer (a quantization layer) before every convolution layer in VGG16. How can I add this new layer before the conv layers while keeping the values of the VGG16 parameters (weights/biases)?
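One way to do this is to rebuild the `features` Sequential, appending a quantization module before each `Conv2d` while reusing the original conv modules (so their weights/biases are untouched). Below is a minimal sketch; the `QuantLayer` class and its `step_size` rounding behavior are assumptions for illustration, and a small stand-in is used in place of the full torchvision `vgg16(pretrained=True).features` block (the same loop applies to it unchanged):

```python
import torch
import torch.nn as nn

class QuantLayer(nn.Module):
    """Hypothetical quantization layer with a learnable step size (assumed)."""
    def __init__(self):
        super().__init__()
        self.step_size = nn.Parameter(torch.ones(1))

    def forward(self, x):
        # Placeholder quantization: round to multiples of step_size.
        return torch.round(x / self.step_size) * self.step_size

# Stand-in for vgg16.features; with torchvision you would instead use
#   features = torchvision.models.vgg16(pretrained=True).features
features = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
    nn.MaxPool2d(2),
)

# Rebuild the Sequential, inserting a QuantLayer before every Conv2d.
# The original Conv2d modules are reused, so their weights/biases are kept.
layers = []
for m in features:
    if isinstance(m, nn.Conv2d):
        layers.append(QuantLayer())
    layers.append(m)
features = nn.Sequential(*layers)

print([type(m).__name__ for m in features])
```

Because the pretrained `Conv2d` objects themselves are placed into the new `Sequential`, no weight copying is needed at this step; only the module (and thus parameter) indices shift.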


You need to write your own VGG16 (better), add the intermediate layers, and then load the weights for the corresponding original VGG layers as well.


Thank you for your help.
The names of the layer parameters have changed in my VGG16 model. How can I load parameters from the pretrained VGG16 model when the parameter names differ?

Here are the pretrained parameter names:

features.0.weight torch.Size([64, 3, 3, 3])
features.0.bias torch.Size([64])
features.2.weight torch.Size([64, 64, 3, 3])
features.2.bias torch.Size([64])
features.5.weight torch.Size([128, 64, 3, 3])
features.5.bias torch.Size([128])
features.7.weight torch.Size([128, 128, 3, 3])
features.7.bias torch.Size([128])
features.10.weight torch.Size([256, 128, 3, 3])

And here are my VGG16 parameter names (step_size is the parameter in my quantization layer):

features.0.step_size torch.Size([1])
features.1.weight torch.Size([64, 3, 3, 3])
features.1.bias torch.Size([64])
features.3.step_size torch.Size([1])
features.4.weight torch.Size([64, 64, 3, 3])
features.4.bias torch.Size([64])
features.7.step_size torch.Size([1])
features.8.weight torch.Size([128, 64, 3, 3])
features.8.bias torch.Size([128])
features.10.step_size torch.Size([1])

It’s a somewhat tedious task, but you have to map the layers’ weights now: work out which layer in the original VGG16 corresponds to which layer in your own implementation, then access the .weight/.bias attributes and load the weights from the pretrained version into yours.
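The mapping can be automated by pairing the state_dict keys positionally: both models enumerate the same conv parameters in the same order, just at shifted indices, once the `step_size` entries are filtered out. A minimal sketch, assuming the new model was built by inserting quantization layers into an otherwise identical Sequential (small stand-ins are used here in place of the full VGG16 `features`):

```python
import torch
import torch.nn as nn

class QuantLayer(nn.Module):
    """Hypothetical quantization layer (name/behavior are assumptions)."""
    def __init__(self):
        super().__init__()
        self.step_size = nn.Parameter(torch.ones(1))

    def forward(self, x):
        return torch.round(x / self.step_size) * self.step_size

# Small stand-ins for the two `features` blocks; with torchvision the
# pretrained one would be torchvision.models.vgg16(pretrained=True).features.
pretrained = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
)
mine = nn.Sequential(
    QuantLayer(), nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    QuantLayer(), nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
)

# The conv parameters appear in the same order in both state_dicts, only at
# shifted indices, so pair the keys positionally after dropping step_size.
src = pretrained.state_dict()
dst_keys = [k for k in mine.state_dict() if not k.endswith("step_size")]
remapped = {dk: v for dk, v in zip(dst_keys, src.values())}

# strict=False leaves the step_size parameters at their initialized values.
mine.load_state_dict(remapped, strict=False)
```

After loading, `mine[1].weight` equals `pretrained[0].weight`, and the `step_size` parameters keep their initial values since they have no counterpart in the pretrained state_dict.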
