How to do transfer learning by freezing certain layers in different architectures

Hi,
I am trying to do binary classification using transfer learning.
In the process, I want to experiment with freezing/unfreezing different layers of different architectures, but so far I am only able to freeze/unfreeze an entire model at once (shown below).
Could anyone illustrate this with a couple of model architectures?
Below, I am using timm with two architectures, ConvNeXt and ResNet:

import timm

convnext = timm.create_model('convnext_tiny_in22k', pretrained=True, num_classes=2)
resnet = timm.create_model('resnet50d', pretrained=True, num_classes=2)
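
This is how I currently freeze a whole model, just toggling requires_grad on every parameter:

# Freeze the entire model: no parameter receives gradient updates.
for param in resnet.parameters():
    param.requires_grad = False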

How do I find out which layers to freeze/unfreeze?
Thanks.

Typically, only the last few layers (e.g. just the “head”) are unfrozen, while the remainder stays frozen.
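
Here is a minimal sketch of one way to do this with your two models. Note that the parameter-name prefixes below ('layer4'/'fc' for ResNet, 'stages.3'/'head' for ConvNeXt) are assumptions about timm's naming; check them against named_parameters() for the exact model you create:

import timm

def freeze_all_but(model, unfrozen_prefixes):
    # Freeze every parameter except those whose name starts with
    # one of the given prefixes.
    for name, param in model.named_parameters():
        param.requires_grad = any(name.startswith(p) for p in unfrozen_prefixes)

convnext = timm.create_model('convnext_tiny_in22k', pretrained=True, num_classes=2)
resnet = timm.create_model('resnet50d', pretrained=True, num_classes=2)

# Print the parameter names to see each architecture's layout.
for name, _ in resnet.named_parameters():
    print(name)

# Assumed name prefixes: unfreeze the final stage plus the classifier.
freeze_all_but(resnet, ['layer4', 'fc'])        # ResNet
freeze_all_but(convnext, ['stages.3', 'head'])  # ConvNeXt

If you only want the head to train, timm models also expose the classifier module directly via model.get_classifier(), so you can freeze everything and then re-enable requires_grad on get_classifier().parameters().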

Best regards

Thomas
