Batch Norm during fine-tuning

I am trying to fine-tune the DeepLabv3+ network on my own dataset, which contains the object categories of VOC.

For fine-tuning, I unfreeze just the last 5 layers:


    for param in model.parameters():
        param.requires_grad = False

    for param in list(model.parameters())[:-5]:
        param.requires_grad = True

I am getting a constant loss value across all epochs. Is there anything that needs to be taken care of with respect to batch normalisation, and is this the correct way of fine-tuning a model in PyTorch?

It looks like you are unfreezing all but the last five layers in the model: the slice [:-5] selects everything except the final five entries, so you probably want [-5:] instead. Also note that model.parameters() will give you all parameters separately, e.g. a layer's weight and bias each count as one entry, so five parameters will generally not correspond to five layers; you might want to use model.children() or otherwise make sure your slicing operation really captures the parameters of the last five layers.
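
As a concrete starting point, here is a minimal sketch using torchvision's deeplabv3_resnet50 as a stand-in for your DeepLabv3+ network (the classifier attribute and the weights arguments are specific to this stand-in and to torchvision >= 0.13, so adapt the module names to your own implementation):

    import torch.nn as nn
    from torchvision.models.segmentation import deeplabv3_resnet50

    # Stand-in model; substitute your own DeepLabv3+ instance.
    model = deeplabv3_resnet50(weights=None, weights_backbone=None, num_classes=21)

    # Freeze every parameter first.
    for param in model.parameters():
        param.requires_grad = False

    # Unfreeze the last five child modules of the classifier head. children()
    # keeps each layer's weight and bias together, unlike parameters(), and
    # the slice is [-5:] rather than [:-5].
    for child in list(model.classifier.children())[-5:]:
        for param in child.parameters():
            param.requires_grad = True

    # Regarding batch norm: model.train() puts every BatchNorm layer back into
    # training mode, so the running statistics of frozen layers keep updating
    # on your fine-tuning batches. One common remedy is to switch the frozen
    # BatchNorm layers to eval mode after every call to model.train().
    model.train()
    for module in model.modules():
        if isinstance(module, nn.BatchNorm2d) and not any(
            p.requires_grad for p in module.parameters()
        ):
            module.eval()

Keeping the frozen batch norm layers in eval mode is a judgment call, but with small fine-tuning batches the running statistics can drift and destabilise training, so it is one thing worth ruling out when the loss refuses to move.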
