Hi,
Actually no, I don’t want to train netD* at all; all the losses are for training netG, and all netD* stay fixed.
The last point of your reply more or less hits the point: I want to backpropagate to netG without updating any of the netD*.
The reason I thought it might work is this post: Freezing parameters - #2 by ebetica
Setting `.requires_grad = False` looks promising, but I am not sure either; it would be great if someone could clarify that.
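For reference, here is a minimal sketch of what I have in mind (the tiny `nn.Sequential` models are just placeholders for my actual netG/netD*): set `requires_grad = False` on netD’s parameters so the loss still backpropagates *through* netD into netG, but netD itself never accumulates gradients, and give the optimizer only netG’s parameters.

```python
import torch
import torch.nn as nn

# Placeholder models standing in for my real netG / netD*.
netG = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 10))
netD = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 1))

# Freeze all netD parameters: no gradients are accumulated for them.
for p in netD.parameters():
    p.requires_grad = False

# The optimizer only holds netG's parameters, so only netG can change.
optG = torch.optim.Adam(netG.parameters(), lr=1e-4)

z = torch.randn(8, 10)
fake = netG(z)
loss = -netD(fake).mean()   # generator loss; gradients flow through the frozen netD into netG

optG.zero_grad()
loss.backward()             # netD's parameters keep grad=None; netG receives gradients
optG.step()
```

Is this the right way to do it, or am I missing something about how `requires_grad` interacts with backprop through a fixed network?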
@apaszke, any thoughts on this?
Regards,
Nabarun