What does param.requires_grad = False or True do in a pretrained model?

In the following code, what is the role of param.requires_grad = True? What happens when param.requires_grad = False?

import torch.nn as nn
import torchvision
from collections import OrderedDict

model = torchvision.models.resnet18(pretrained=True)

for param in model.parameters():
    param.requires_grad = True

model.fc = nn.Sequential(OrderedDict([
    ('fc1', nn.Linear(512, 256)),
    ('relu1', nn.ReLU()),
    ('dropout1', nn.Dropout(p=0.5)),
    ('fc2', nn.Linear(256, 128)),
    ('relu2', nn.ReLU()),
    ('dropout2', nn.Dropout(p=0.5)),
    ('fc3', nn.Linear(128, 10)),
    ('output', nn.LogSoftmax(dim=1))
]))


If requires_grad is set to False, you freeze that part of the model: its parameters receive no gradients during backpropagation, so the optimizer never updates them. In the example below, the parameters of every layer are updated during training because requires_grad is set to True.

import torch, torchvision
import torch.nn as nn
from collections import OrderedDict

model = torchvision.models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = True

model.fc = nn.Sequential(OrderedDict([
    ('fc1', nn.Linear(512, 256)),
    ('relu1', nn.ReLU()),
    ('dropout1', nn.Dropout(p=0.5)),
    ('fc2', nn.Linear(256, 128)),
    ('relu2', nn.ReLU()),
    ('dropout2', nn.Dropout(p=0.5)),
    ('fc3', nn.Linear(128, 10)),
    ('output', nn.LogSoftmax(dim=1))
]))

for name, param in model.named_parameters():
    print('Name:', name, 'Requires_Grad:', param.requires_grad)
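To see the freezing effect concretely, here is a minimal sketch (my own example, not from the original post): freeze the whole backbone, replace the head with a single new layer, and check which parameters actually receive gradients after a backward pass.

import torch
import torch.nn as nn
import torchvision

model = torchvision.models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False        # freeze all pretrained weights

model.fc = nn.Linear(512, 10)          # newly created layers default to requires_grad=True

out = model(torch.randn(1, 3, 224, 224))
out.sum().backward()                   # dummy loss, just to populate .grad

print(model.conv1.weight.grad)         # None -> frozen, never updated
print(model.fc.weight.grad.shape)      # torch.Size([10, 512]) -> will train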

Try changing requires_grad = True in the loop above to False and you will see that the model is frozen except for the new fc layers: the parameters of a freshly created module default to requires_grad=True, so the replaced head still trains.
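One practical note (not from the original thread): when you freeze part of the model this way, it is common to hand the optimizer only the parameters that still require gradients. A short sketch, with SGD and its hyperparameters chosen purely for illustration:

import torch.optim as optim

# Collect only the trainable (unfrozen) parameters for the optimizer.
trainable_params = [p for p in model.parameters() if p.requires_grad]
optimizer = optim.SGD(trainable_params, lr=0.001, momentum=0.9)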
