Normalisation in DCGANs with a mean and std other than 0.5

In the DCGAN tutorial (DCGAN Tutorial — PyTorch Tutorials 1.12.0+cu102 documentation), normalisation is done with means of (0.5, 0.5, 0.5) and standard deviations of (0.5, 0.5, 0.5). This happens to work nicely, because the minimum and maximum pixel values of an image, 0 and 1, are each exactly one standard deviation away from the mean. After normalisation, real images therefore lie in [-1, 1], which matches the range of the generator's tanh output.
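A quick sanity check of that arithmetic, using torchvision's `transforms.Normalize` (the tensor shapes are just illustrative):

```python
import torch
from torchvision import transforms

# With mean 0.5 and std 0.5, (x - 0.5) / 0.5 maps the pixel range [0, 1] to [-1, 1].
normalize = transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))

darkest = torch.zeros(3, 4, 4)   # all pixels at 0
brightest = torch.ones(3, 4, 4)  # all pixels at 1
print(normalize(darkest).unique())    # tensor([-1.])
print(normalize(brightest).unique())  # tensor([1.])
```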

If I instead use the ImageNet values for the mean and standard deviation, (0.485, 0.456, 0.406) and (0.229, 0.224, 0.225), then the normalised real images fall, per channel, between (-2.1179, -2.0357, -1.8044) and (2.2489, 2.4286, 2.6400).
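Those bounds are just (0 - mean) / std and (1 - mean) / std per channel:

```python
import torch

mean = torch.tensor([0.485, 0.456, 0.406])
std = torch.tensor([0.229, 0.224, 0.225])

# Per-channel extremes of a normalised image whose raw pixels lie in [0, 1]
low = (0.0 - mean) / std
high = (1.0 - mean) / std
print(low)   # tensor([-2.1179, -2.0357, -1.8044])
print(high)  # tensor([ 2.2489,  2.4286,  2.6400])
```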

I could multiply the generator's output channel-wise by (2.2489, 2.4286, 2.6400) and then take the element-wise max with (-2.1179, -2.0357, -1.8044), but that sounds like a terrible idea, since the clamp zeros the gradient for dark colours.
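A minimal sketch of what I mean, assuming the generator ends in tanh (the shapes here are made up for illustration):

```python
import torch

mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)
low, high = (0 - mean) / std, (1 - mean) / std

# Stand-in for a tanh output from the generator, values in [-1, 1]
fake = torch.empty(1, 3, 64, 64).uniform_(-1, 1).requires_grad_(True)

# Scale each channel up to the new maximum, then clip the bottom at the new minimum
rescaled = torch.max(fake * high, low)

# Wherever fake * high falls below `low`, the max picks the constant bound,
# so no gradient reaches those (dark) pixels.
rescaled.sum().backward()
print((fake.grad == 0).float().mean())  # fraction of pixels with zero gradient
```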

I could subtract the means from the output of the generator and divide by the standard deviations, but this defeats the point of normalisation in the first place, since the generator would effectively be outputting unnormalised values.
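For concreteness, this is roughly what I mean by that option (the wrapper class is my own invention, and I'm assuming the tanh output is first mapped back to [0, 1] before being normalised):

```python
import torch
import torch.nn as nn

class NormalisedGenerator(nn.Module):
    """Hypothetical wrapper: the inner generator still ends in tanh; its output is
    mapped back to [0, 1] and then normalised with the ImageNet statistics, so the
    discriminator sees generated and real images on the same scale."""
    def __init__(self, generator: nn.Module):
        super().__init__()
        self.generator = generator
        self.register_buffer("mean", torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1))
        self.register_buffer("std", torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        img = (self.generator(z) + 1) / 2    # tanh output [-1, 1] -> pixel space [0, 1]
        return (img - self.mean) / self.std  # same normalisation the real images get
```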

Does anyone know a way to get around this issue?