Simple way to inverse a Normalize transform?

Hi all!
I’m using torchvision.transforms to normalize my images before sending them to a pretrained VGG19.
Therefore I have the following:

normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

My process is generative and I get an image back from it, but in order to visualize it, I’d like to “un-normalize” it.
Is there a simple way, in the API, to invert the Normalize transform?
Or should it be coded by hand?

Also, I’m a bit surprised that the process works fine without any normalization step.
The whole thing is about style transfer, from this paper: https://arxiv.org/abs/1508.06576, and there’s a nice PyTorch implementation out there (not mine) here: https://github.com/alexis-jacq/Pytorch-Tutorials.

That implementation doesn’t normalize anything before feeding images to VGG19, and the results are OK.
Basically, VGG19 is used to extract features from the fed images.
Any thoughts on why it still works?


Did you find a solution to your problem yet? I also normalize before training to get better loss values, but the generated images look very dark and strange in terms of colors.


Hey,
a way to reverse the normalization does not seem to exist in the API. However, it is straightforward to write a small class that does so.

class UnNormalize(object):
    def __init__(self, mean, std):
        self.mean = mean
        self.std = std

    def __call__(self, tensor):
        """
        Args:
            tensor (Tensor): Normalized tensor image of size (C, H, W).
        Returns:
            Tensor: Un-normalized image.
        """
        # Works in-place, channel by channel.
        for t, m, s in zip(tensor, self.mean, self.std):
            t.mul_(s).add_(m)
            # The normalize code -> t.sub_(m).div_(s)
        return tensor

You instantiate it with the same arguments used for Normalize and then use it the same way:

unorm = UnNormalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225))
unorm(tensor)

The easiest way would be:

invTrans = transforms.Compose([
    transforms.Normalize(mean=[0., 0., 0.],
                         std=[1/0.229, 1/0.224, 1/0.225]),
    transforms.Normalize(mean=[-0.485, -0.456, -0.406],
                         std=[1., 1., 1.]),
])

inv_tensor = invTrans(inp_tensor)

A more concise approach based on Saurabh’s answer:

inv_normalize = transforms.Normalize(
    mean=[-0.485/0.229, -0.456/0.224, -0.406/0.255],
    std=[1/0.229, 1/0.224, 1/0.255]
)
inv_tensor = inv_normalize(tensor)

You’ve adjusted the mean incorrectly.
The mean should be
mean=[-0.485 * 0.229, -0.456 * 0.224, -0.406 * 0.255]

because the renormalization needs the answer to be (X/sigma - mu) = (X - mu*sigma)/sigma


The inverse normalization should be

x = z * sigma + mean
  = (z + mean/sigma) * sigma
  = (z - (-mean/sigma)) / (1/sigma),

since the normalization process is actually z = (x - mean) / sigma, if you look carefully at the documentation of transforms.Normalize. I tested my original code and it worked fine.
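
A quick self-contained round-trip check of that algebra (my own sketch; the random tensor and tolerance are arbitrary):

import torch
from torchvision import transforms

mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]

normalize = transforms.Normalize(mean=mean, std=std)
# Inverse: shift by -mean/sigma, scale by 1/sigma, per the derivation above.
inv_normalize = transforms.Normalize(
    mean=[-m / s for m, s in zip(mean, std)],
    std=[1 / s for s in std],
)

x = torch.rand(3, 8, 8)     # a fake (C, H, W) image
z = normalize(x.clone())    # clone, in case Normalize runs in-place
print(torch.allclose(inv_normalize(z), x, atol=1e-6))  # True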


Keep in mind that torchvision.transforms.Normalize operates in-place. If you want an immutable implementation:

import torch
import torchvision


class NormalizeInverse(torchvision.transforms.Normalize):
    """
    Undoes the normalization and returns the reconstructed images
    in the input domain.
    """

    def __init__(self, mean, std):
        mean = torch.as_tensor(mean)
        std = torch.as_tensor(std)
        std_inv = 1 / (std + 1e-7)   # epsilon guards against division by zero
        mean_inv = -mean * std_inv
        super().__init__(mean=mean_inv, std=std_inv)

    def __call__(self, tensor):
        # Clone so the caller's tensor is never modified in-place.
        return super().__call__(tensor.clone())
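
Usage mirrors Normalize itself; for example (a sketch reusing the ImageNet statistics from above):

denorm = NormalizeInverse(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225))
restored = denorm(normalized_tensor)  # the input tensor stays untouched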

Thank you, but there is a typo in the third element of mean: it’s 0.225, not 0.255.

inv_normalize = transforms.Normalize(
    mean=[-0.485/0.229, -0.456/0.224, -0.406/0.225],
    std=[1/0.229, 1/0.224, 1/0.255]
)
inv_tensor = inv_normalize(tensor)

There is still a typo in the std :smiley:

inv_normalize = transforms.Normalize(
    mean=[-0.485/0.229, -0.456/0.224, -0.406/0.225],
    std=[1/0.229, 1/0.224, 1/0.225]
)
inv_tensor = inv_normalize(tensor)

You can also use kornia.color.Denormalize:
https://kornia.readthedocs.io/en/latest/color.html#kornia.color.Denormalize
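
As far as I can tell from the linked docs, it is a module constructed with the same statistics; treat the exact signature below as an assumption and check it against your kornia version:

import torch
import kornia

# Assumed interface per the linked docs; verify against your kornia version.
denorm = kornia.color.Denormalize(
    mean=torch.tensor([0.485, 0.456, 0.406]),
    std=torch.tensor([0.229, 0.224, 0.225]),
)
restored = denorm(batch)  # batch: (B, C, H, W) normalized images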


A slightly modified version of the above code.

This assumes mean and std are already defined:

inv_normalize = transforms.Normalize(
    mean=[-m/s for m, s in zip(mean, std)],
    std=[1/s for s in std]
)

inv_tensor = inv_normalize(tensor)
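
For example, the snippet above works with the ImageNet statistics from earlier in the thread defined as:

mean = [0.485, 0.456, 0.406]  # ImageNet channel means
std = [0.229, 0.224, 0.225]   # ImageNet channel standard deviations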

Sorry to bother, but this doesn’t seem to work for me: inv_normalize(tensor) raises a TypeError complaining that the tensor is not a torch image.

Is this a version issue? Thanks in advance!

Could you check the shape and type of the input you are passing to this method?
I guess the number of dimensions or the dtype might be unexpected, which would make Normalize raise this error.
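
For example, a quick check before calling the transform (tensor here stands for whatever you pass in):

print(type(tensor), tensor.shape, tensor.dtype)
# Normalize expects a float tensor of shape (C, H, W); in older torchvision
# versions, anything else (e.g. a PIL image) triggers "tensor is not a torch image".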


Following @joels’ method, it worked for me.

def inverse_normalize(tensor, mean, std):
    # Un-normalize each channel in-place: t = t * std + mean
    for t, m, s in zip(tensor, mean, std):
        t.mul_(s).add_(m)
    return tensor

input_tensor = inverse_normalize(tensor=input_tensor,
                                 mean=(0.485, 0.456, 0.406),
                                 std=(0.229, 0.224, 0.225))

I found that even after performing the inverse normalization, the inverse-normalized OpenCV image is not equivalent to the original one read via cv2.imread. So I am sticking to getting a mask from the PyTorch model output and multiplying it with the original frame read via cv2.imread.

The inverse normalization I use looks like this:

def inverse_normalize(tensor, mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)):
    mean = torch.as_tensor(mean, dtype=tensor.dtype, device=tensor.device)
    std = torch.as_tensor(std, dtype=tensor.dtype, device=tensor.device)
    if mean.ndim == 1:
        mean = mean.view(-1, 1, 1)  # reshape to (C, 1, 1) for broadcasting
    if std.ndim == 1:
        std = std.view(-1, 1, 1)
    tensor.mul_(std).add_(mean)     # in-place: t = t * std + mean
    return tensor
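
For reference, a hypothetical sketch of the mask-times-original-frame workflow mentioned above (all names are mine):

import cv2
import numpy as np

def apply_mask(frame_path, mask):
    # frame: original BGR uint8 image of shape (H, W, 3), read with cv2.imread
    frame = cv2.imread(frame_path)
    # mask: (H, W) array of 0/1 values, e.g. a thresholded model output
    return frame * mask[..., None].astype(np.uint8)  # broadcast over channels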

Works like a charm: normalization followed by un-normalization gives back the original image.

This code seems simpler:

If you want to reverse the normalization, all you need is a new normalization with slightly modified parameters:

import torch
import matplotlib.pyplot as plt
from torchvision import transforms

mean = torch.tensor([0.4915, 0.4823, 0.4468])
std = torch.tensor([0.2470, 0.2435, 0.2616])

normalize = transforms.Normalize(mean.tolist(), std.tolist())
unnormalize = transforms.Normalize((-mean / std).tolist(), (1.0 / std).tolist())

img_unn = unnormalize(img)
plt.imshow(img_unn.permute(1, 2, 0))  # (C, H, W) -> (H, W, C) for matplotlib
plt.show()

Anyway, I hate the fact that we don’t have an un-normalize (de-normalize) transform; it would be quite helpful for regression neural networks too.
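
The same algebra applies to regression targets; a minimal sketch (the statistics and values are made up):

import torch

y_mean, y_std = 42.0, 7.5                # training-target statistics (assumed)
y_pred = torch.tensor([-0.5, 0.0, 1.2])  # normalized model outputs (example)
y_pred_real = y_pred * y_std + y_mean    # map back to the original scale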

Here is some working code, since this is still one of the first Google results for inverse normalization. It follows the torchvision.transforms convention, so you can compose it.

Code Gist

import torchvision


class UnNormalize(torchvision.transforms.Normalize):
    def __init__(self, mean, std, *args, **kwargs):
        # Invert z = (x - mean) / std by normalizing with (-mean/std, 1/std).
        new_mean = [-m / s for m, s in zip(mean, std)]
        new_std = [1 / s for s in std]
        super().__init__(new_mean, new_std, *args, **kwargs)
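
Since it subclasses Normalize, it composes like any other transform, for example (names below are mine):

unnormalize = UnNormalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
to_pil = torchvision.transforms.Compose([
    unnormalize,
    torchvision.transforms.ToPILImage(),  # values should be back in [0, 1]
])
img = to_pil(normalized_tensor)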