How do I freeze some layers of a network in PyTorch and train only the rest?

You can call the torch.nn.Module.requires_grad_() method on the corresponding modules when necessary.

import torch
import torch.nn as nn

model = nn.Conv2d(...)

# freeze model
model.requires_grad_(False)

...
# unfreeze model
model.requires_grad_(True)
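
As a side note, here is a minimal generic sketch (not from this thread) of a common follow-up step: once some parameters are frozen, pass only the still-trainable ones to the optimizer, so it keeps no state (e.g. momentum buffers) for the frozen parameters.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 8), nn.Linear(8, 2))
model[0].requires_grad_(False)  # freeze the first layer

# Hand only the still-trainable parameters to the optimizer.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=0.01
)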

If you are using a pretrained model, let's say ResNet-50:

import torchvision.models as models
model = models.resnet50(pretrained=True)

For accessing the different layers:

model.layer1
model.layer2
model.layer3
model.layer4

Setting requires_grad to False for layer1:

for param in model.layer1.parameters():
    param.requires_grad = False

Confirming whether it is frozen or not:

for param in model.layer1.parameters():
    print(param.requires_grad)
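
Another quick check, a generic idiom (assuming model is the ResNet-50 from above), is to count how many parameter elements are still trainable:

# Parameter elements that still receive gradients vs. the total count.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"{trainable} / {total} parameters are trainable")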

Hope this helps.

Do you mean just adding A.parameters() to the optimizer when it was declared?


frozen_layers = [model.module.down.ops.0.conv_block.adn.N.weight]
for layer in frozen_layers:
    for name, value in layer.named_parameters():
        value.requires_grad = False

  File "<ipython-input-31-a9e45f05e3b6>", line 2
    frozen_layers = [model.module.down.ops.0.conv_block.adn.N.weight]
                                           ^                                                          
SyntaxError: invalid syntax

I am facing an issue trying to freeze the layer named ‘module.down.ops.0.conv_block.adn.N.weight’, I think because of the .0. in it. Please let me know how to freeze such layers.

Thanks in advance!
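
For what it's worth, the .0. comes from an indexed container (an nn.ModuleList or nn.Sequential), so it appears in the string names returned by named_parameters() but is not valid Python attribute syntax, which is what triggers the SyntaxError. Here is a minimal sketch of one way to freeze such a parameter, assuming the dotted name above is exactly what model.named_parameters() reports:

# ".0." denotes index 0 inside an nn.ModuleList/nn.Sequential; it is valid
# in the dotted names from named_parameters() but not as attribute access.
target = "module.down.ops.0.conv_block.adn.N.weight"
for name, param in model.named_parameters():
    if name == target:
        param.requires_grad = False

Equivalently, you can index the container with brackets instead of a dot, e.g. model.module.down.ops[0].conv_block.adn.N.weight.requires_grad = False (assuming ops is an indexable container).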