BatchNorm with specified scale and bias values

Hi, I recently read the paper “Large Scale GAN Training for High Fidelity Natural Image Synthesis”, where the scale and bias values of batch norm are not learned directly during training but are computed from the generation conditions (the class-label embedding and the noise z).

However, after reading the PyTorch documentation on BatchNorm, I didn’t find any clues on how to implement this trick/architecture. Do I need to implement it from scratch?

Thanks in advance :>

There are better experts on these forums to answer questions about this paper, but regarding the batch norm:
You can reuse the existing parts and add your own. For inspiration, you might look at the conditional batch norm discussions, and this one in particular.
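To make the “reuse the existing parts” idea concrete, here is a minimal sketch of a BigGAN-style conditional batch norm: `nn.BatchNorm2d` with `affine=False` handles the normalization, and two linear layers predict the per-channel scale and bias from a conditioning vector (e.g. a class embedding concatenated with a chunk of z). The names `ConditionalBatchNorm2d` and `cond_dim` are my own, and the initialization choices are one reasonable option, not the paper's exact recipe:

```python
import torch
import torch.nn as nn


class ConditionalBatchNorm2d(nn.Module):
    """BatchNorm2d whose scale (gamma) and bias (beta) come from a
    conditioning vector instead of being learned as fixed parameters."""

    def __init__(self, num_features, cond_dim):
        super().__init__()
        # affine=False disables the built-in learnable gamma/beta;
        # the running statistics and normalization are reused as-is.
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        # Linear layers map the condition to per-channel gamma and beta.
        self.gamma = nn.Linear(cond_dim, num_features)
        self.beta = nn.Linear(cond_dim, num_features)
        # Initialize so the module starts out as a plain batch norm:
        # gamma(cond) == 1 and beta(cond) == 0 for any condition.
        nn.init.zeros_(self.gamma.weight)
        nn.init.ones_(self.gamma.bias)
        nn.init.zeros_(self.beta.weight)
        nn.init.zeros_(self.beta.bias)

    def forward(self, x, cond):
        # x: (N, C, H, W); cond: (N, cond_dim)
        out = self.bn(x)
        # Reshape the predicted gamma/beta to broadcast over H and W.
        g = self.gamma(cond).unsqueeze(-1).unsqueeze(-1)
        b = self.beta(cond).unsqueeze(-1).unsqueeze(-1)
        return g * out + b


# Usage example with hypothetical sizes:
layer = ConditionalBatchNorm2d(num_features=16, cond_dim=10)
x = torch.randn(4, 16, 8, 8)
cond = torch.randn(4, 10)
y = layer(x, cond)
```

With this initialization, at the start of training the layer behaves exactly like an unconditional batch norm, and the conditioning gradually learns to modulate the feature maps.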

Best regards

Thomas


I will check these materials. Much appreciated.