When freezing layers of a pretrained ResNet50, are there only 10 layers in total to freeze?

I found a good explanation of freezing ResNet50 layers.
I am wondering about the layer count: as far as I know, there are 50 layers inside ResNet50, so even if I count only the conv layers and the fc layer, there should be at least 20 layers whose gradients need freezing.
Please explain how ResNet50 ends up with only 10 layers in total when we freeze the model.
Thank you in advance!! :slight_smile:

Below is the post I saw!

model_ft = models.resnet50(pretrained=True)
ct = 0
for child in model_ft.children():
    ct += 1
    if ct < 7:
        # freeze all parameters in this child module
        for param in child.parameters():
            param.requires_grad = False
This freezes child modules 1-6 out of the 10 top-level children of ResNet50.

You can print the model and will see all of its defined layers.
This paper shows the architecture in Figure 3 for ResNet34, and you can see that each “conv block” is defined as a layer.


Thanks for the response and time. I now understand the conv block representation from the paper. I am just curious to know why there are 10 children (which everyone translates as layers) instead of 50 in ResNet50, or 34 for ResNet34.


Each registered block is counted as a child module and contains more registered modules internally. If you flatten these blocks, you get the corresponding 50 (or 34) layers. The current implementation reuses common blocks for a cleaner coding style instead of manually defining each of the 50 layers separately.
