Fine-tuning the convnet question: can I change the cells in a fc layer?

Sorry if my questions are stupid. I am a beginner.

I have a few questions, maybe a few very basic questions…

I see that in the transfer learning tutorial we are using:

import torch.nn as nn
import torchvision.models as models

model_ft = models.resnet18(pretrained=True)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(512, 2)

  1. Are we fine-tuning the layer with 512 filters?
    If not, what should I do to fine-tune both the last layer and the fully connected layer before it, which has 512 features?

  2. How can I change the number 512? Meaning: if I want to fine-tune the network in a way that num_ftrs changes to 100, what should I do? Is that even possible?
    I receive an error when I do `model_ft.fc = nn.Linear(100, 2)`.
    Here is the error: RuntimeError: size mismatch, m1: [4 x 512], m2: [100 x 2] at /pytorch/torch/lib/TH/generic/THTensorMath.c:1293

  3. My last question: if I want to fine-tune a network that has more than one fully connected layer, what should I do? E.g., let's say I want to fine-tune VGG, and I want to fine-tune the last two fc layers.

Thanks a lot in advance

  1. The models are pretrained on ImageNet, which has 1000 classes, far more than we need. We are fine-tuning the entire network, but the example only has 2 classes (bees and ants), so we change the last layer to output only 2 class scores.

  2. I think the actual code should be `model_ft.fc = nn.Linear(num_ftrs, 2)`. It basically creates a fc layer connecting all the input features to a linear layer with 2 outputs. You can add another, intermediate fc layer to give you 100 features.

  3. As I said, unless you freeze all the model parameters (set `requires_grad` to `False`), you are actually fine-tuning the entire model, not just the last layer.


I see your point!!!
Regarding #2, can you tell me how I can add an intermediate fc layer to give me 100 features?

I believe you can do something like this:

model = models.vgg16(pretrained=True)
# List all the modules in the model's classifier
mod = list(model.classifier.children())
# Pop the last module and add 2 modules
mod.pop()
mod.append(nn.Linear(4096, 100))
mod.append(nn.Linear(100, num_of_classes))

# Replace vgg16's classifier with this new classifier
new_classifier = nn.Sequential(*mod)
model.classifier = new_classifier