How to perform finetuning on a PyTorch net

I’m using this implementation of SegNet in PyTorch, and I want to finetune it. I’ve read around online and found this approach (basically, freeze all layers except the last one in your net). My problem is that SegNet has more than 100 layers, and I’m looking for a simpler way to do it than writing 100 lines of code.

Do you think this could work? Or is this utter nonsense?

import torch.optim as optim

model = SegNet()
for name, param in model.named_parameters():
    if not name.startswith('conv11d'):    # only the last layer should remain trainable
        param.requires_grad = False

optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)


def train():
    ...

How can I check if this is working as intended?

That looks good, although I would also pass only the last layer’s parameters to the optimizer, i.e.

optimizer = optim.SGD(model.conv11d.parameters(), lr=0.01, momentum=0.5)
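A common alternative is to filter on requires_grad instead of hard-coding the layer name, so the optimizer only ever sees trainable parameters. A minimal sketch (using a tiny stand-in model, since SegNet’s actual layers differ):

```python
import torch.nn as nn
import torch.optim as optim

# Stand-in model with two layers (assumption: not the real SegNet).
model = nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 2))

# Freeze everything except the last layer.
for param in model[0].parameters():
    param.requires_grad = False

# The filter keeps only trainable parameters, so frozen ones are
# never updated and you don't need to reference a layer by name.
optimizer = optim.SGD(
    [p for p in model.parameters() if p.requires_grad],
    lr=0.01, momentum=0.5,
)
```

This also generalizes if you later decide to unfreeze more layers: the same optimizer construction keeps working without changes.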

You can verify it’s working by comparing the values of the weights and/or biases of some frozen layers and of the last layer after a few iterations: the frozen ones should be unchanged. You can access these parameters via your_module.weight and your_module.bias.
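That check can be sketched as follows: snapshot the parameters before training, run a few steps, and compare. TinyNet below is a hypothetical stand-in for SegNet that just reuses the last-layer name conv11d; the loss is a dummy one for illustration.

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Hypothetical stand-in for SegNet: two conv layers, the last named conv11d.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 8, 3, padding=1)
        self.conv11d = nn.Conv2d(8, 2, 3, padding=1)

    def forward(self, x):
        return self.conv11d(torch.relu(self.conv1(x)))

model = TinyNet()
for name, param in model.named_parameters():
    if not name.startswith('conv11d'):
        param.requires_grad = False

# Snapshot all parameters before training.
before = {name: p.detach().clone() for name, p in model.named_parameters()}

optimizer = optim.SGD(model.conv11d.parameters(), lr=0.01, momentum=0.5)
for _ in range(3):
    out = model(torch.randn(1, 3, 8, 8))
    loss = out.sum()          # dummy loss, just to produce gradients
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Frozen layer unchanged; last layer's bias updated.
frozen_unchanged = torch.equal(before['conv1.weight'], model.conv1.weight)
last_updated = not torch.equal(before['conv11d.bias'], model.conv11d.bias)
```

If the frozen weights drift or the last layer stays constant, the freezing (or the optimizer's parameter list) is wired up wrong.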
