Is there a uniform code for finetuning on different pretrained models?

I am using different pretrained models to compare the features they extract, but each model seems to have a different input size and a different feature size. Is there a uniform piece of code to handle this? Also, each model uses a different attribute name for the last fully connected layer; for DenseNet, for example, I have to use model.classifier = nn.Linear(in_features, out_size).
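For example, here is roughly the variation I mean, using a few torchvision models (the attribute names and sizes below are what I believe they are, please correct me if any are wrong):

    import torchvision.models as models

    resnet = models.resnet18(pretrained=True)         # last layer: resnet.fc, in_features=512, expects 224x224 input
    densenet = models.densenet121(pretrained=True)    # last layer: densenet.classifier, in_features=1024, expects 224x224 input
    inception = models.inception_v3(pretrained=True)  # last layer: inception.fc, in_features=2048, expects 299x299 input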


I’m afraid not. Maybe try something like:

import torch.nn as nn

if hasattr(model, 'fc'):
    # e.g. ResNet and Inception name the last layer `fc`
    model.fc = nn.Linear(model.fc.in_features, out_size)
elif hasattr(model, 'classifier'):
    # e.g. DenseNet names the last layer `classifier`
    model.classifier = nn.Linear(model.classifier.in_features, out_size)
else:
    raise Exception('unknown classifier module name')

But I can’t guarantee it; for example, VGG’s classifier attribute is an nn.Sequential rather than a single Linear layer, so the second branch would need adjusting there.

Different models are defined differently, so you have to look at how each model is actually defined and tweak it to your needs. I do not think there is a universal piece of code that applies to all models.
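For example, you can print the model to see how its final layer is named and then tweak that layer; a quick sketch with torchvision’s DenseNet (num_classes is just a placeholder for your own task):

    import torch.nn as nn
    import torchvision.models as models

    model = models.densenet121(pretrained=True)
    print(model)  # the printed definition ends with: (classifier): Linear(in_features=1024, out_features=1000, bias=True)

    # tweak the last layer for your own number of classes
    num_classes = 10
    model.classifier = nn.Linear(model.classifier.in_features, num_classes)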

Different networks accept inputs of different sizes. The usual approach is to apply a scaling transform to resize the image to whatever size the pretrained model expects. Some researchers consider this resizing a problem, though, and methods such as Feature Pyramid Networks have been proposed to address it.
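The resizing itself is straightforward with torchvision transforms; a typical preprocessing pipeline looks like this (224x224 fits most ImageNet models, but the exact size depends on the model, e.g. Inception v3 expects 299x299):

    import torchvision.transforms as transforms

    # resize and crop the image to the size the pretrained model expects,
    # then normalize with the usual ImageNet statistics
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])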

Here is a paper that uses this method.

We know that each network has two sections: a feature extractor (the convolutional layers) and a classifier (the fully connected layers). This method is applied between the two sections, which makes the network independent of the input size. But as I remember, it slows things down quite a bit.
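A much simpler trick along the same lines (not the method from the paper, just my own illustration) is to put an adaptive pooling layer between the two sections, so the classifier always receives a fixed-size feature vector no matter what the input resolution is:

    import torch
    import torch.nn as nn
    import torchvision.models as models

    resnet = models.resnet18(pretrained=True)
    features = nn.Sequential(*list(resnet.children())[:-2])  # convolutional section only (drop avgpool and fc)
    pool = nn.AdaptiveAvgPool2d((1, 1))                      # fixed-size output regardless of input resolution

    for size in (224, 320, 448):
        x = torch.randn(1, 3, size, size)
        feat = pool(features(x)).flatten(1)
        print(size, feat.shape)  # always torch.Size([1, 512])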

@JiaxiangZheng We use different strides in an additional pooling layer to equalize the dimensionality across architectures. It works really well.
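The post does not say exactly how the strides are chosen, so the following is only my own sketch of the idea, using adaptive average pooling (which picks the strides for you); the pooled output sizes are my own choice, picked so that the flattened feature vectors end up the same length:

    import torch
    import torch.nn as nn
    import torchvision.models as models

    def make_extractor(model, pooled_size):
        # keep only the convolutional section and append an extra pooling layer;
        # pooled_size is chosen per architecture so that channels * H * W matches
        return nn.Sequential(*list(model.children())[:-2],
                             nn.AdaptiveAvgPool2d(pooled_size),
                             nn.Flatten())

    # resnet18 ends with 512 channels and resnet50 with 2048, so pooling them to
    # 2x2 and 1x1 respectively gives the same 2048-dimensional feature vector
    extractors = {
        'resnet18': make_extractor(models.resnet18(pretrained=True), (2, 2)),
        'resnet50': make_extractor(models.resnet50(pretrained=True), (1, 1)),
    }

    x = torch.randn(1, 3, 224, 224)
    for name, extractor in extractors.items():
        print(name, extractor(x).shape)  # both print torch.Size([1, 2048])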