Model freeze explanation (transfer learning)

Hi, I have a question because I'm new to DNNs. I've been reading about transfer learning, and there are several strategies for it, like freezing the base network's layers or only the classification layer. Is there a simple explanation behind this (a math explanation would be appreciated)? What happens to the frozen layers while training?
I found GitHub code that freezes some layers like this:

def freeze_net_layers(net):
    # disable gradient tracking for every parameter in the network
    for param in net.parameters():
        param.requires_grad = False

What is requires_grad? What happens if we set the value to True?

In a model you only define the forward propagation, and PyTorch handles the backpropagation automatically for you with loss.backward(). When you set param.requires_grad = False, you're telling PyTorch's autograd not to compute gradients for those parameters during backpropagation (setting it back to True re-enables gradient computation).
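A minimal sketch of this behavior, using a small hypothetical two-layer model (not from the original post): after freezing the first layer and calling loss.backward(), its .grad stays None, while the unfrozen layer receives a gradient.

```python
import torch
import torch.nn as nn

# Hypothetical toy model to illustrate requires_grad.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# Freeze the first Linear layer only.
for param in model[0].parameters():
    param.requires_grad = False

x = torch.randn(3, 4)
loss = model(x).sum()
loss.backward()

# Frozen layer: no gradient was computed at all.
print(model[0].weight.grad)           # None
# Unfrozen layer: a gradient tensor was populated.
print(model[2].weight.grad is not None)  # True
```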

Essentially, what happens to a frozen layer during training is that it still takes part in forward propagation, but PyTorch no longer computes gradients for it, so its parameters aren't updated; they are "frozen". In math terms, gradient descent updates each parameter as θ ← θ − η ∂L/∂θ; for a frozen parameter, the gradient ∂L/∂θ is never computed, so the update is skipped and θ keeps its pretrained value. Usually you would freeze most of the network and train only the last few layers (perhaps the last 3-4). This is also called fine-tuning.
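Putting it together, here is a hedged end-to-end sketch with a hypothetical "base + head" model standing in for a pretrained network: the base is frozen with the same freeze_net_layers helper, only the still-trainable parameters are handed to the optimizer, and after one step the frozen weights are unchanged while the head's weights move.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pretrained base and a new classification head.
base = nn.Sequential(nn.Linear(4, 8), nn.ReLU())
head = nn.Linear(8, 2)
model = nn.Sequential(base, head)

def freeze_net_layers(net):
    # disable gradient tracking for every parameter in the network
    for param in net.parameters():
        param.requires_grad = False

freeze_net_layers(base)

# Only pass the trainable parameters to the optimizer.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=0.1
)

# Snapshot the weights so we can see which ones change.
frozen_before = base[0].weight.clone()
head_before = head.weight.clone()

x = torch.randn(16, 4)
loss = model(x).pow(2).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()

print(torch.equal(base[0].weight, frozen_before))  # frozen base unchanged
print(torch.equal(head.weight, head_before))       # head was updated
```

Skipping frozen parameters when building the optimizer also saves memory, since no optimizer state (e.g. momentum buffers) is allocated for them.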
