I’m training a model to reproduce images (with pixel values in [0, 1]), but the ground-truth images are normalized (for VGG).
Should my network’s output be in [0, 1] and then normalized the same way as the ground truth?
If so, how can I apply torchvision.transforms.Normalize() to a whole batch of images?
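For context, this is what I mean by normalizing a whole batch at once — a sketch using plain broadcasting with the ImageNet statistics that torchvision’s pretrained VGG expects, instead of calling Normalize per image:

```python
import torch

# ImageNet per-channel statistics used by torchvision's pretrained VGG
mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

batch = torch.rand(8, 3, 224, 224)      # a batch of images in [0, 1]
normalized = (batch - mean) / std       # broadcasts across the batch dimension
```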
Otherwise, should I just leave out the activation at the output?
Normalization maps [0, 1] to roughly [-2.1179, 2.64], so neither a sigmoid nor a ReLU is possible.
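That range is just the per-channel extremes of 0 and 1 after normalizing with the ImageNet mean and std:

```python
import torch

mean = torch.tensor([0.485, 0.456, 0.406])
std = torch.tensor([0.229, 0.224, 0.225])

low = (0.0 - mean) / std   # per-channel minimum after normalization
high = (1.0 - mean) / std  # per-channel maximum after normalization
# low.min() is about -2.1179, high.max() is about 2.64
```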
Thanks a lot for your help.