Mixed precision with generative adversarial networks (GANs)

Hi guys, I'm a newcomer to deep learning and I'm working on GANs now. I want to ask: if I use torch.cuda.amp.autocast and torch.cuda.amp.GradScaler during training, will it really affect the visual quality of my model's outputs?

Could you show some examples of how a GAN performs when trained with fp16?
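For reference, this is roughly how I'm wiring autocast and GradScaler into the usual two-optimizer GAN loop. It's a minimal sketch: the tiny linear G/D, the batch shapes, and the learning rates are placeholders, not my real model, and I fall back to fp32 on CPU via the `enabled` flag.

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = torch.cuda.is_available()  # autocast/GradScaler need CUDA; no-op on CPU

# placeholder generator and discriminator, just to show the loop structure
G = nn.Linear(16, 32).to(device)
D = nn.Linear(32, 1).to(device)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

# one scaler per optimizer, since G and D step separately
scaler_g = torch.cuda.amp.GradScaler(enabled=use_amp)
scaler_d = torch.cuda.amp.GradScaler(enabled=use_amp)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(8, 32, device=device)
noise = torch.randn(8, 16, device=device)

# --- discriminator step ---
opt_d.zero_grad()
with torch.cuda.amp.autocast(enabled=use_amp):
    fake = G(noise)
    loss_d = bce(D(real), torch.ones(8, 1, device=device)) \
           + bce(D(fake.detach()), torch.zeros(8, 1, device=device))
scaler_d.scale(loss_d).backward()  # scale the loss to avoid fp16 underflow
scaler_d.step(opt_d)               # unscales grads, skips step if inf/nan
scaler_d.update()

# --- generator step ---
opt_g.zero_grad()
with torch.cuda.amp.autocast(enabled=use_amp):
    loss_g = bce(D(G(noise)), torch.ones(8, 1, device=device))
scaler_g.scale(loss_g).backward()
scaler_g.step(opt_g)
scaler_g.update()
```

Is this the right structure, or should the two optimizers share a single GradScaler?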

By the way, I didn't use torch.nn.parallel; I just assign each generator to its own GPU and send tensors from GPU1 to GPU0, from GPU0 to GPU1, and so on. Will I lose any precision or gradient information in the loss computation and backward pass?
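For context, the cross-GPU handoff I mean looks roughly like this (a made-up toy example; the shapes are placeholders, and it falls back to a single device when two GPUs aren't available):

```python
import torch

# pick two devices if we have them, otherwise collapse to one
dev0 = "cuda:0" if torch.cuda.is_available() else "cpu"
dev1 = "cuda:1" if torch.cuda.device_count() >= 2 else dev0

x = torch.randn(4, 8, device=dev0, requires_grad=True)  # activation on device 0
w = torch.randn(8, 8, device=dev1)                      # weight on device 1

# .to() on an activation is recorded by autograd, so the copy
# stays in the graph and gradients flow back across devices
h = x.to(dev1) @ w
loss = h.sum()
loss.backward()

assert x.grad is not None  # gradient made it back to device 0
```

My understanding is that the device-to-device copy itself is exact as long as the dtype is unchanged, so any precision loss would come from fp16 math, not from the transfer. Is that right?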