How to add an additional layer on top of a pretrained model?

How can I append an additional layer on top of an existing pretrained architecture? I saw the snippet below in "Using freezed pretrained resnet18 as a feature extractor for cifar10", but it modifies the last layer instead of appending a new one to the model.

import torch.nn as nn
import torchvision

model = torchvision.models.vgg19(pretrained=True)
# Freeze all pretrained parameters
for param in model.parameters():
    param.requires_grad = False
# Replace the last fully-connected layer; parameters of newly
# constructed modules have requires_grad=True by default.
# Note: VGG19 has no .fc attribute; its last layer is classifier[6],
# which takes 4096 in-features.
model.classifier[6] = nn.Linear(4096, 8)
model.cuda()

In a later part of the code, the model was called as follows when processing the input data

output = model(input)


A very simple way is

model = nn.Sequential(models.vgg19(True), YourModule())
# Use it as usual
out = model(input)

Hi,
Also, is it possible to add a conv layer to vgg19 in this way?
Thanks!

Hi. What is “YourModule()”? Could you please explain?

YourModule() refers to any callable that returns an nn.Module. It’s just a placeholder meaning you can insert whatever module you want there.
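For concreteness, here is a hedged sketch of what such a placeholder module could look like: a small head that maps VGG19's 1000 ImageNet logits down to 8 classes (the layer sizes are assumptions, not from the thread).

```python
import torch
import torch.nn as nn

# Hypothetical example of a "YourModule": any nn.Module works here.
# It must accept whatever the upstream model emits (1000 features for
# a stock vgg19).
class YourModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.head = nn.Sequential(
            nn.ReLU(),
            nn.Linear(1000, 8),  # assumed 8-class output
        )

    def forward(self, x):
        return self.head(x)

m = YourModule()
out = m(torch.randn(4, 1000))
print(out.shape)  # torch.Size([4, 8])
```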


In this snippet, how do we make sure that the output dimension of models.vgg19 is in sync with the input dimension of YourModule()?


I have the same problem.

Thanks a lot RicCu. I was stuck trying to modify the original end classifier, which outputs 600 features, in a hybrid classical-quantum transfer learning model; I needed a binary output. I used model_hybrid = nn.Sequential(model, DressedQuantumNet()), and it worked!