Model.train and requires_grad

I was wondering if we need to do something on the optimizer side so that it does not update the parameters of the frozen layers? Something like:

torch.optim.Adam(filter(lambda p: p.requires_grad, self.netG.parameters()),
                 lr=opt.lr, betas=(opt.beta1, 0.999))

https://discuss.pytorch.org/t/how-the-pytorch-freeze-network-in-some-layers-only-the-rest-of-the-training/7088/9
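
For context, here is a minimal, self-contained sketch of the pattern I have in mind, with a toy two-layer model standing in for self.netG and hardcoded placeholder values standing in for opt.lr and opt.beta1:

import torch
import torch.nn as nn

# Toy stand-in for self.netG, just to illustrate the pattern.
netG = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 1))

# Freeze the first layer: its parameters get requires_grad == False,
# so no gradients are computed for them during backward().
for p in netG[0].parameters():
    p.requires_grad_(False)

# Pass only the trainable parameters to the optimizer, so it never sees
# the frozen ones (and keeps no Adam state for them).
# lr / betas are placeholders for opt.lr and opt.beta1.
optimizer = torch.optim.Adam(
    (p for p in netG.parameters() if p.requires_grad),
    lr=1e-4,
    betas=(0.5, 0.999),
)

# One dummy training step to check that the frozen layer stays unchanged.
frozen_before = netG[0].weight.clone()
x = torch.randn(4, 10)
loss = netG(x).pow(2).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(torch.equal(frozen_before, netG[0].weight))  # expected: True

My understanding is that filtering mainly keeps the optimizer from allocating state for the frozen parameters, but I'd like to confirm whether it is actually required to keep them unchanged.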