How to use 'requires_grad = False'

Hi all.
I want to stop back-propagation in some layers.
For example, in the network below I want to apply back-prop only to model_2 and not to model_1.

import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.model_1 = nn.Sequential(
            nn.Conv2d(in_channels=3, out_channels=32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),
        )
        self.model_2 = nn.Sequential(
            nn.Conv2d(in_channels=32, out_channels=48, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2)
        )

    def forward(self, x):
        x = self.model_1(x)
        x = self.model_2(x)
        x = x.view(x.size(0), -1)  # flatten the feature maps
        return x

I have searched about this and I have an idea: separate Model() into Model1() and Model2() and then set requires_grad on the parameters of Model1().
Is this the right way to do what I want?

import torch.optim as optim

model1 = Model1()
model2 = Model2()

for param in model1.parameters():
    param.requires_grad = False

optimizer = optim.SGD(model2.parameters(), lr=0.01, momentum=0.9)

and no optimizer for model1.
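For context, here is a minimal sketch of how I imagine one training step would look with this setup. Model1()/Model2() are assumed to be the two halves of the original Model, and the batch size, input size, and MSE loss are just placeholders:

criterion = nn.MSELoss()
inputs = torch.randn(4, 3, 32, 32)      # placeholder batch
targets = torch.randn(4, 48 * 8 * 8)    # placeholder targets (48 x 8 x 8 feature map, flattened)

features = model1(inputs)               # frozen part, no gradients are computed here
output = model2(features)               # trainable part
output = output.view(output.size(0), -1)

loss = criterion(output, targets)
optimizer.zero_grad()
loss.backward()                         # gradients are computed only for model2's parameters
optimizer.step()                        # updates only model2 (model1 is not in the optimizer)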

I think your solution is correct. Try running the code and set a breakpoint at the end to check that every parameter in Model1() has requires_grad=False.
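Instead of a breakpoint, you can also check it programmatically. A small sketch, assuming model1/model2 are named as in your snippet:

# Every parameter of the frozen part should have requires_grad=False ...
print(all(not p.requires_grad for p in model1.parameters()))   # should print True

# ... while the trainable part should still require gradients.
print(all(p.requires_grad for p in model2.parameters()))       # should print True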

You don’t have to set requires_grad = False if you don’t have an optimizer containing these parameters. Depending on your use case, you could also wrap the calls to Model1 in with torch.no_grad(), since your version still tracks the operations performed by the first model.
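A minimal sketch of that, assuming model1 is the frozen part and the input tensor is a placeholder:

import torch

inputs = torch.randn(4, 3, 32, 32)   # placeholder batch

with torch.no_grad():
    features = model1(inputs)        # this forward pass is not recorded by autograd

output = model2(features)            # only model2's part of the graph is built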