I have trained a generator named netG (which is kept fixed in the following code). Given x, I then tried to use the Adam optimizer to minimize the following function in order to recover z:
No, I am expecting this z not to stay zero the whole time. The loss does depend on this tensor, which is exactly why I expect the loss to decrease.
In that case you have to make sure that gradients are propagated back to z (I am not sure if that is the correct term; what I mean is that PyTorch must be able to compute gradients with respect to z when you call netG.forward(z)).
This is not something I have done myself (yet!), so I am not entirely sure of the details. I think you may need to implement a backward function inside your netG class. The code in this message may help you get started.
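For what it's worth, here is a minimal sketch of optimizing z through a frozen generator with Adam. Since the original objective wasn't shown, I'm assuming an MSE loss between netG(z) and x, and I'm using a small `nn.Linear` as a stand-in for netG; the key points are that z must be a leaf tensor created with `requires_grad=True`, and that z itself (not `netG.parameters()`) is passed to the optimizer. In the usual case, autograd fills in `z.grad` on `loss.backward()` without any custom backward code, as long as every op inside netG is differentiable.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical stand-in for netG; any differentiable module behaves the same way.
netG = nn.Linear(8, 8)
for p in netG.parameters():
    p.requires_grad_(False)  # netG stays fixed; only z is optimized

x = torch.randn(1, 8)                      # the target observation
z = torch.zeros(1, 8, requires_grad=True)  # z must be a leaf with requires_grad=True
optimizer = torch.optim.Adam([z], lr=0.1)  # optimize z itself, not netG.parameters()

for step in range(200):
    optimizer.zero_grad()
    loss = F.mse_loss(netG(z), x)  # assumed objective; substitute your own
    loss.backward()                # autograd computes z.grad through netG
    optimizer.step()
```

If z stays at zero and the loss never moves, the usual culprits are creating z without `requires_grad=True`, re-wrapping it each iteration (which makes it a new leaf the optimizer doesn't track), or a non-differentiable op inside netG that silently cuts the graph.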