DCGAN ReLU vs. Leaky ReLU

I noticed that in the DCGAN implementation the Generator uses ReLU but the Discriminator uses LeakyReLU - is there any reason for the difference?

Also - does anyone know why the Discriminator's first layer doesn't have BatchNorm?

Please read the original paper ("Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks", Radford et al.) to understand the architecture choices; most of them were decided empirically. On your two questions specifically: the authors report that the leaky rectified activation worked well in the discriminator, especially for higher-resolution modeling, and that applying batchnorm to all layers caused sample oscillation and model instability, which they avoided by leaving batchnorm out of the generator's output layer and the discriminator's input layer.
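
In case it helps, here is a minimal sketch of how those guidelines typically look in PyTorch. It assumes 64x64 RGB images, a 100-dimensional latent vector, and the common `ngf`/`ndf`=64 layer widths - not necessarily the exact implementation you're looking at:

```python
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, nz=100, ngf=64, nc=3):
        super().__init__()
        self.main = nn.Sequential(
            # ReLU + BatchNorm throughout the generator...
            nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 8),
            nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 4),
            nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 2),
            nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf),
            nn.ReLU(True),
            # ...except the output layer: Tanh, and no BatchNorm here.
            nn.ConvTranspose2d(ngf, nc, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        # z has shape (N, nz, 1, 1); output is (N, nc, 64, 64)
        return self.main(z)

class Discriminator(nn.Module):
    def __init__(self, ndf=64, nc=3):
        super().__init__()
        self.main = nn.Sequential(
            # First layer: LeakyReLU and *no* BatchNorm.
            nn.Conv2d(nc, ndf, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            # Remaining layers: LeakyReLU (slope 0.2) with BatchNorm.
            nn.Conv2d(ndf, ndf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 4),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 4, ndf * 8, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ndf * 8),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 8, 1, 4, 1, 0, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # x has shape (N, nc, 64, 64); output is (N, 1, 1, 1)
        return self.main(x)
```

The two things you asked about are the LeakyReLU slope of 0.2 in the Discriminator versus plain ReLU in the Generator, and the missing `BatchNorm2d` in the Discriminator's first layer (and, symmetrically, the Generator's output layer), both of which follow the paper's guidelines.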

Training GANs is very unstable, and I don't think there is a rigorous justification behind each choice. As @mariosasko mentioned, these settings were picked empirically after many experiments (hyperparameter tuning).