Firstly, in GAN training the two networks are updated alternately: G is held fixed while D is trained, and then D is held fixed while G is trained. In the D update, detach() is used to avoid backpropagating into G, as seen in:
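For context, here is a minimal sketch of the D step I mean. The names (netD, netG, criterion, optimizerD) follow the tutorial, but the networks are tiny stand-ins so the snippet runs on its own:

```python
import torch
import torch.nn as nn

nz = 100  # latent vector size (assumed, as in the tutorial)

# Toy stand-ins; in the real code these are the DCGAN discriminator/generator.
netG = nn.Sequential(nn.Linear(nz, 64), nn.Tanh())
netD = nn.Sequential(nn.Linear(64, 1), nn.Sigmoid())
criterion = nn.BCELoss()
optimizerD = torch.optim.Adam(netD.parameters(), lr=2e-4)

noise = torch.randn(8, nz)
fake = netG(noise)

# D step: detach() cuts the graph so D's loss does not backprop into G.
optimizerD.zero_grad()
output = netD(fake.detach()).view(-1)
errD_fake = criterion(output, torch.zeros(8))
errD_fake.backward()   # gradients reach netD only; netG's .grad stays None
optimizerD.step()
```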
However, when training G, there doesn’t seem to be a need to call detach() on the discriminator’s output to prevent updates to the discriminator - why? This is seen in:
In this case, is netD effectively frozen while G is being updated?
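For reference, the G step I’m asking about looks roughly like this (same toy stand-ins as above; only optimizerG steps, even though gradients flow through netD):

```python
import torch
import torch.nn as nn

nz = 100  # latent vector size (assumed)

netG = nn.Sequential(nn.Linear(nz, 64), nn.Tanh())
netD = nn.Sequential(nn.Linear(64, 1), nn.Sigmoid())
criterion = nn.BCELoss()
optimizerG = torch.optim.Adam(netG.parameters(), lr=2e-4)

fake = netG(torch.randn(8, nz))

# G step: no detach(), so gradients flow back through netD into netG.
optimizerG.zero_grad()
output = netD(fake).view(-1)
errG = criterion(output, torch.ones(8))  # G wants D to say "real"
errG.backward()  # netD accumulates gradients too, but is never stepped here

before = [p.clone() for p in netD.parameters()]
optimizerG.step()  # updates netG's weights only
after = list(netD.parameters())
```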
DCGAN Generator Initial Projection Layer
Secondly, why is there no need to project the latent vector through a fully connected layer and reshape it before the first ConvTranspose2d, as described in the original DCGAN paper? The current implementation simply applies a ConvTranspose2d directly to the input vector:
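Concretely, the first layer I mean looks like this (nz=100 and ngf=64 assumed, matching the tutorial's defaults):

```python
import torch
import torch.nn as nn

nz, ngf = 100, 64  # latent size and feature-map base width (assumed)

# First generator block: no FC projection, just a transposed convolution
# applied to the latent vector viewed as a (N, nz, 1, 1) "image".
first = nn.ConvTranspose2d(nz, ngf * 8, kernel_size=4, stride=1, padding=0, bias=False)

z = torch.randn(16, nz, 1, 1)
out = first(z)
print(out.shape)  # torch.Size([16, 512, 4, 4])
```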
A projection layer like this is quite commonly implemented in TensorFlow DCGANs, for example:
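Sketching that idiom in PyTorch for consistency with the rest of the post (nn.Linear plus a view() standing in for TF's Dense + Reshape; sizes are assumptions):

```python
import torch
import torch.nn as nn

nz, ngf = 100, 64  # assumed sizes matching the tutorial

class ProjectThenDeconv(nn.Module):
    """Hypothetical generator front end with an explicit FC projection,
    mirroring the Dense + Reshape idiom common in TensorFlow DCGANs."""
    def __init__(self):
        super().__init__()
        self.project = nn.Linear(nz, ngf * 8 * 4 * 4, bias=False)
        self.deconv = nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, stride=2, padding=1, bias=False)

    def forward(self, z):
        x = self.project(z)            # (N, 512*4*4)
        x = x.view(-1, ngf * 8, 4, 4)  # reshape into 4x4 feature maps
        return self.deconv(x)          # (N, 256, 8, 8)

out = ProjectThenDeconv()(torch.randn(16, nz))
print(out.shape)  # torch.Size([16, 256, 8, 8])
```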
Is there something about ConvTranspose2d that I’m missing?
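To make the comparison concrete, here is a quick numerical check (my own sketch, not from the tutorial) that on a 1x1 spatial input, a ConvTranspose2d with kernel 4, stride 1, padding 0 computes the same linear map as an FC projection to 4x4 feature maps:

```python
import torch
import torch.nn as nn

nz, c = 100, 512
deconv = nn.ConvTranspose2d(nz, c, kernel_size=4, stride=1, padding=0, bias=False)

# Build a Linear layer with the *same* weights: ConvTranspose2d's weight has
# shape (nz, c, 4, 4); the equivalent FC maps nz -> c*4*4.
fc = nn.Linear(nz, c * 4 * 4, bias=False)
with torch.no_grad():
    fc.weight.copy_(deconv.weight.reshape(nz, c * 4 * 4).t())

z = torch.randn(8, nz)
out_deconv = deconv(z.view(8, nz, 1, 1))
out_fc = fc(z).view(8, c, 4, 4)
print(torch.allclose(out_deconv, out_fc, atol=1e-5))  # True
```

If this holds, the transposed conv on a 1x1 input is already an FC projection in disguise, which would explain why the tutorial skips the explicit Dense layer.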