Dear PyTorch fellows,
To give a bit of context, I'm working with Generative Adversarial Networks (GANs) for image-to-image translation, so both my input and my output are images.
To help the model converge, I run a semantic segmentation model on both the input and the output of the GAN and compute a cross-entropy loss on specific regions, so that the semantics of the image are preserved during the translation.
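Roughly, the consistency term looks like this (a simplified sketch of my setup; seg_model and region_mask stand for my segmentation network and my region of interest, and real/fake are the GAN input and output):

import torch
import torch.nn.functional as F

def semantic_consistency_loss(seg_model, real, fake, region_mask):
    """Cross-entropy between the segmentations of the GAN input and its
    translated output, restricted to a region of interest."""
    with torch.no_grad():
        # Pseudo-labels from the input image (no gradient needed here).
        target = seg_model(real).argmax(dim=1)
    # Segmentation of the translated image; gradients flow back to the GAN.
    logits = seg_model(fake)
    per_pixel = F.cross_entropy(logits, target, reduction='none')  # (N, H, W)
    # Average only over the masked region.
    return (per_pixel * region_mask).sum() / region_mask.sum().clamp(min=1)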
My question comes from the fact that my GAN requires a different normalization than my semantic segmentation model:
from torchvision import transforms

transform_GAN = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.Resize(256),
    transforms.RandomCrop(256),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5),
                         (0.5, 0.5, 0.5)),
])

transform_seg = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((512, 512)),
    transforms.ToTensor(),
    transforms.Normalize((0.485, 0.456, 0.406),
                         (0.229, 0.224, 0.225)),
])
Shall I denormalize the output of the first transform before applying the normalization of the second?
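Concretely, since the GAN output has to stay differentiable (so I can't go back through PIL), what I had in mind is something like this, with the mean/std values copied from the two transforms above:

import torch
import torch.nn.functional as F

def gan_to_seg(x):
    """Map a batch normalized for the GAN (roughly [-1, 1]) to the
    normalization and size expected by the segmentation model."""
    gan_mean = x.new_tensor([0.5, 0.5, 0.5]).view(1, 3, 1, 1)
    gan_std = x.new_tensor([0.5, 0.5, 0.5]).view(1, 3, 1, 1)
    seg_mean = x.new_tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
    seg_std = x.new_tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)
    x = x * gan_std + gan_mean    # undo the GAN normalization -> back to [0, 1]
    x = (x - seg_mean) / seg_std  # apply the ImageNet normalization
    # Resize on the tensor itself so gradients can flow back to the GAN.
    return F.interpolate(x, size=(512, 512), mode='bilinear', align_corners=False)

Is chaining the two normalizations like this the right way to do it, or is there a cleaner approach?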
Thank you for your help,
Gautier