I was wondering whether we need to do something on the optimizer side so that it does not update the parameters of the frozen layers? For example:
torch.optim.Adam(filter(lambda p: p.requires_grad, self.netG.parameters()),
                 lr=opt.lr, betas=(opt.beta1, 0.999))
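
Here is a minimal, self-contained sketch of what I mean, using a toy nn.Sequential in place of self.netG and hard-coded hyperparameters in place of opt.lr / opt.beta1 (those names are just placeholders): the frozen layer gets requires_grad=False, and only the remaining parameters are handed to Adam, so its weights stay untouched after a step.

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 10),   # layer to freeze
    nn.ReLU(),
    nn.Linear(10, 1),    # layer to train
)

# Freeze the first linear layer.
for p in model[0].parameters():
    p.requires_grad = False

# Only parameters with requires_grad=True are passed to the optimizer,
# so no update (and no Adam state) is ever created for the frozen layer.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad),
    lr=2e-4, betas=(0.5, 0.999),
)

x, y = torch.randn(4, 10), torch.randn(4, 1)
frozen_before = model[0].weight.clone()

loss = nn.functional.mse_loss(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# The frozen layer's weights are unchanged after the update step.
assert torch.equal(frozen_before, model[0].weight)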