I am trying to fine-tune the DeepLabV3+ network on my own dataset, which contains the object categories of VOC.
For tuning, I unfreeze just the last 5 layers:

```python
# Freeze everything first, then unfreeze only the last 5 parameter tensors
# (note: these are parameter tensors, not whole layers)
for param in model.parameters():
    param.requires_grad = False
for param in list(model.parameters())[-5:]:
    param.requires_grad = True
```
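For reference, here is a sketch of how I could select the trainable part explicitly instead of slicing by position, and check what actually gets gradients. `classifier` is just a placeholder prefix for whatever the head module is called in the DeepLabV3+ implementation I am using, and the learning rate is only an example value:

```python
import torch

# Unfreeze by name instead of by position; "classifier" is an assumed
# module name -- adjust to match the actual DeepLabV3+ implementation.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("classifier")

# Sanity check: list which tensors will actually receive gradients
trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(len(trainable), "trainable tensors")

# Pass only the trainable parameters to the optimizer
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-3,       # placeholder learning rate
    momentum=0.9,
)
```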
I am getting a constant loss value across all epochs. Is there anything that needs to be taken care of with respect to batch normalization, and is this the correct way to fine-tune a model in PyTorch?
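For context on the batch-norm part of the question, this is my current understanding, which I am not sure about: setting `requires_grad = False` only freezes a BatchNorm layer's affine weight/bias, while its running mean/variance still get updated whenever the model is in `train()` mode. The sketch below (with `freeze_bn_stats` being a helper name I made up) would keep those statistics fixed:

```python
import torch.nn as nn

def freeze_bn_stats(model):
    # Put every BatchNorm layer into eval mode so its running mean/variance
    # are no longer updated by forward passes through the frozen backbone.
    for module in model.modules():
        if isinstance(module, nn.BatchNorm2d):
            module.eval()

model.train()           # normal training mode for the unfrozen layers
freeze_bn_stats(model)  # re-apply after every call to model.train()
```

Is something like this necessary here, or is setting `requires_grad` on the parameters enough?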