Simple way to inverse the Normalize transform?

Hi all!
I’m using torchvision.transforms to normalize my images before sending them to a pretrained VGG19.
Therefore I have the following:

from torchvision import transforms

normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

My process is generative and I get an image back from it, but in order to visualize it I’d like to “un-normalize” it.
Is there a simple way, in the API, to invert the Normalize transform?
Or should it be coded by hand?

Also, I’m a bit surprised that the process works really well without any normalization step.
The whole thing is about style transfer, from this paper: https://arxiv.org/abs/1508.06576, and there’s a nice PyTorch implementation out there (not mine) here: https://github.com/alexis-jacq/Pytorch-Tutorials.

That implementation doesn’t normalize anything before feeding images to VGG19 and the results are OK.
Basically, VGG19 is used to extract features from the fed images.
Any thoughts on why it still works?


Did you find any solution for your problem yet? I also normalize before training to get better loss values, but the generated images look very dark and strange in terms of colors.

Hey,
A way to reverse the normalization does not seem to exist in the API. However, it is pretty straightforward to create a simple class that does so.

class UnNormalize(object):
    def __init__(self, mean, std):
        self.mean = mean
        self.std = std

    def __call__(self, tensor):
        """
        Args:
            tensor (Tensor): Normalized Tensor image of size (C, H, W).
        Returns:
            Tensor: Un-normalized image.
        """
        # Undo the normalization in place, channel by channel.
        for t, m, s in zip(tensor, self.mean, self.std):
            t.mul_(s).add_(m)
            # The normalize code -> t.sub_(m).div_(s)
        return tensor

You instantiate it with the same arguments used for the normalization, and then use it the same way:

unorm = UnNormalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225))
unorm(tensor)
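
Note that this modifies the tensor in place; a minimal sketch of un-normalizing for visualization without touching the original (img is a hypothetical normalized (C, H, W) tensor):

img_vis = unorm(img.clone())  # clone first, since UnNormalize mutates its input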

The easiest way would be:

invTrans = transforms.Compose([
    transforms.Normalize(mean=[0., 0., 0.],
                         std=[1/0.229, 1/0.224, 1/0.225]),
    transforms.Normalize(mean=[-0.485, -0.456, -0.406],
                         std=[1., 1., 1.]),
])

inv_tensor = invTrans(inp_tensor)
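
As a quick sanity check (a sketch; normalize is the transform from the original post and the random image is just a stand-in):

import torch
from torchvision import transforms

normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])
img = torch.rand(3, 224, 224)                      # stand-in image in [0, 1]
recovered = invTrans(normalize(img))               # forward, then inverse
print(torch.allclose(img, recovered, atol=1e-6))   # expect True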

A more concise approach based on Saurabh’s answer:

inv_normalize = transforms.Normalize(
    mean=[-0.485/0.229, -0.456/0.224, -0.406/0.255],
    std=[1/0.229, 1/0.224, 1/0.255]
)
inv_tensor = inv_normalize(tensor)

You’ve adjusted the mean wrongly.
The mean should be

mean=[-0.485 * 0.229, -0.456 * 0.224, -0.406 * 0.255]

because the renormalization needs the answer to be (X/sigma - mu) = (X - mu*sigma)/sigma.

The inverse normalization should be

x = z * sigma + mean
  = (z + mean/sigma) * sigma
  = (z - (-mean/sigma)) / (1/sigma),

since the normalization process is actually z = (x - mean) / sigma, as the documentation of transforms.Normalize states. I tested my original code and it worked fine.
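
A one-channel numeric check of that algebra (a sketch with an arbitrary pixel value):

mean, std = 0.485, 0.229                  # first-channel statistics from above
x = 0.7                                   # arbitrary pixel value
z = (x - mean) / std                      # forward: what Normalize computes
x_back = (z - (-mean / std)) / (1 / std)  # inverse via Normalize(-mean/std, 1/std)
print(abs(x - x_back) < 1e-12)            # True: x is recovered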


Keep in mind that torchvision.transforms.Normalize's operations are in-place. If you want an immutable implementation:

import torch
import torchvision


class NormalizeInverse(torchvision.transforms.Normalize):
    """
    Undoes the normalization and returns the reconstructed images
    in the input domain.
    """

    def __init__(self, mean, std):
        mean = torch.as_tensor(mean)
        std = torch.as_tensor(std)
        std_inv = 1 / (std + 1e-7)      # small epsilon avoids division by zero
        mean_inv = -mean * std_inv
        super().__init__(mean=mean_inv, std=std_inv)

    def __call__(self, tensor):
        # Clone so the original tensor is left unmodified.
        return super().__call__(tensor.clone())
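
Usage is the same as for Normalize; a short sketch (img is a hypothetical normalized tensor):

denorm = NormalizeInverse(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225))
img_vis = denorm(img)  # img itself stays unchanged thanks to the clone()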

Thank you.
But there is a typo in the third element of the mean.
It’s 0.225, not 0.255:

inv_normalize = transforms.Normalize(
    mean=[-0.485/0.229, -0.456/0.224, -0.406/0.225],
    std=[1/0.229, 1/0.224, 1/0.255]
)
inv_tensor = inv_normalize(tensor)

There is still a typo in the std :smiley:

inv_normalize = transforms.Normalize(
    mean=[-0.485/0.229, -0.456/0.224, -0.406/0.225],
    std=[1/0.229, 1/0.224, 1/0.225]
)
inv_tensor = inv_normalize(tensor)

You can also use kornia.color.Denormalize:
https://kornia.readthedocs.io/en/latest/color.html#kornia.color.Denormalize
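
A sketch based on the linked documentation (the exact signature is an assumption and may differ between kornia versions):

import torch
import kornia

# Assuming the kornia.color.Denormalize API from the linked docs;
# batch is a hypothetical (B, C, H, W) normalized tensor.
denorm = kornia.color.Denormalize(torch.tensor([0.485, 0.456, 0.406]),
                                  torch.tensor([0.229, 0.224, 0.225]))
img = denorm(batch)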

A slightly modified version of the above code.

This assumes mean and std are already defined:

inv_normalize = transforms.Normalize(
    mean=[-m/s for m, s in zip(mean, std)],
    std=[1/s for s in std]
)

inv_tensor = inv_normalize(tensor)
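
This could also be wrapped in a small helper (a sketch; inverse_normalize is just an illustrative name):

from torchvision import transforms

def inverse_normalize(mean, std):
    # Build a Normalize transform that undoes Normalize(mean, std).
    return transforms.Normalize(
        mean=[-m / s for m, s in zip(mean, std)],
        std=[1 / s for s in std],
    )

inv_normalize = inverse_normalize(mean=[0.485, 0.456, 0.406],
                                  std=[0.229, 0.224, 0.225])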

Sorry to bother, but this doesn’t seem to work for me: inv_normalize(tensor) raises a TypeError complaining that tensor is not a torch image.

Is this a version issue? Thanks in advance!

Could you check the shape and type of the input you are passing to this method?
I guess the number of dimensions or the type might be unexpected, and thus Normalize might raise this issue.
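
For example, something like this sketch would show what Normalize is actually receiving:

import torch

print(type(tensor))
if torch.is_tensor(tensor):
    # Normalize expects a float tensor image, typically (C, H, W)
    # (newer torchvision versions also accept a (B, C, H, W) batch).
    print(tensor.shape, tensor.dtype)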