How can we rebuild images from a Generator in a GAN model?

I followed the paper https://arxiv.org/pdf/1511.06434.pdf to build a GAN model from scratch. My version is very similar to the official tutorial: DCGAN Tutorial — PyTorch Tutorials 2.0.0+cu117 documentation

One thing confuses me though: the Generator transforms noise into a fake image, but since it uses the tanh activation function, the values of the fake image lie in the range between -1 and 1. To display this as an RGB image, don't we need values between 0 and 255? In the tutorial I cannot really find the step from (-1, 1) to (0, 255).

After training the Generator for some time, how can we recover the pixels from those values to actually see a real image? I think I lack some knowledge here, so sorry if this question is rather basic!

The tutorial uses torchvision.utils.make_grid with normalize=True, which will - according to the docs - normalize the image to the range [0, 1] using the min and max values given by value_range. By default, value_range is computed from the min/max values of the input tensor, unless it's set to specific values:

  • normalize (bool, optional) – If True, shift the image to the range (0, 1), by the min and max values specified by value_range. Default: False.
  • value_range (tuple, optional) – tuple (min, max) where min and max are numbers, then these numbers are used to normalize the image. By default, min and max are computed from the tensor.

Here is a code example showing this behavior:

import torch
from torchvision.utils import make_grid

output = (torch.rand(3, 224, 224) * 2.) - 1.
print(output.min())
# tensor(-1.0000)
print(output.max())
# tensor(1.0000)

out = make_grid(output)
print(out.min())
# tensor(-1.0000)
print(out.max())
# tensor(1.0000)

out = make_grid(output, normalize=True)
print(out.min())
# tensor(0.)
print(out.max())
# tensor(1.0000)