Why do images look weird after (ImageNet) normalization?

Hey,
I am using a pretrained network and wanted to normalize my images with the ImageNet statistics.
For some reason, however, the images look really weird after the normalization.

Can someone explain to me why this is happening?


(image: with normalization)

(image: without normalization)

Here is the code of the transform (I built my own):

import torchvision.transforms.functional as TF

class my_transform:
    def __init__(self):
        # my_parameters
        pass

    def __call__(self, img, label=False):
        # (some other transforms like horizontal flip)
        img = TF.to_tensor(img)  # PIL image -> float tensor in [0, 1]
        if not label:
            # z-score normalization with the ImageNet statistics
            img = TF.normalize(img, mean=[0.485, 0.456, 0.406],
                               std=[0.229, 0.224, 0.225])
        return img

any help is much appreciated!

Btw, normalizing that way doesn’t even guarantee a format you can visualize directly…
The values need to be between 0 and 1 (floats) or 0 and 255 (integers).

Right before the normalization, at

img = TF.to_tensor(img)

the values are in the range [0, 1], but after the normalization some of them fall outside that range (below 0 or above 1).
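
For anyone who wants to see the actual numbers, here is a quick sanity check (my own sketch, not from the original post) showing the extremes you get with the ImageNet statistics:

import torch
import torchvision.transforms.functional as TF

mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]

# all-black and all-white 3-channel images, as produced by to_tensor
black = torch.zeros(3, 4, 4)
white = torch.ones(3, 4, 4)

print(TF.normalize(black, mean, std).min())  # ~ -2.12, i.e. (0 - 0.485) / 0.229
print(TF.normalize(white, mean, std).max())  # ~  2.64, i.e. (1 - 0.406) / 0.225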

So you mean the normalization works correctly and is just for the network input, not meant to be visualized directly?

If I want to visualize it, do I need to re-normalize it back to [0, 1]?

If so, how do I do that? :smiley:

You can just map the values to [0, 1] in a linear way: https://stackoverflow.com/questions/4154969/how-to-map-numbers-in-range-099-to-range-1-01-0


Thanks man!
Worked perfectly.

In case anyone else comes across this issue:

    def renormalize(self, tensor):
        # linearly rescale the tensor values to [0, 1] for visualization
        min_from = tensor.min()
        max_from = tensor.max()
        min_to, max_to = 0, 1
        return min_to + (max_to - min_to) * ((tensor - min_from) / (max_from - min_from))

This code worked for me.
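
If anyone wants to plot the result directly, here is a minimal usage sketch (my own, assuming renormalize was added as a method of the my_transform class above):

import matplotlib.pyplot as plt

t = my_transform()
# img: a normalized CHW tensor, e.g. the output of the transform above
img_vis = t.renormalize(img)                  # linearly rescale to [0, 1]
plt.imshow(img_vis.permute(1, 2, 0).numpy())  # CHW -> HWC for matplotlib
plt.show()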


Hi,
As you are using custom mean and std values, I think the proper approach to unnormalization would be to invert the transform using those same mean and std values. Please see this thread: Simple way to inverse transform ? Normalization

Best


Thank you! Tried it out and the output looks like a normal picture…

I will definitely keep the solution from the post you sent, thanks for that! :slight_smile:

One thing that still confuses me now: which tensor should be fed into the network?

import torchvision.transforms as T
import torchvision.transforms.functional as TF

# bring the values back to the range [0, 1] by inverting the normalization
invTrans = T.Normalize(
    mean=[-0.485 / 0.229, -0.456 / 0.224, -0.406 / 0.225],
    std=[1 / 0.229, 1 / 0.224, 1 / 0.225]
)

img = TF.normalize(img, mean=[0.485, 0.456, 0.406],
                   std=[0.229, 0.224, 0.225])

img2 = invTrans(img)

img (outside the range [0, 1]) or img2 (back in [0, 1])?

Hi,
For me the point is that he wants to visualize the images after normalizing. That’s not directly possible due to the value range they end up in, but you can keep the normalization and just remap a copy for display.

Unnormalizing does indeed return the original image.

Yes, the reason I suggested that approach is that it restores statistics similar to the original images in the dataset, although that may not be necessary for visualization purposes.

If you just want to scale your input to the range [0, 1], transforms.ToTensor will do the job. If you need to compute a z-score, you can use transforms.Normalize.
As you mentioned, you are using a pretrained network, so you will want to normalize the inputs. Unnormalization is used for viewing the transformed images, or when your model predicts an image, which is usually a normalized output.

Just note that when you use transforms.Normalize, you are computing a z-score, so the output will generally not be in the range [0, 1]. It depends on the mean and std: if you set mean = std = 0.5 for all channels, the output will be in [-1, 1]; otherwise it will fall outside that range.
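
A quick check of that claim (my own sketch): with mean = std = 0.5, the z-score (x - 0.5) / 0.5 = 2x - 1 maps [0, 1] exactly onto [-1, 1]:

import torch
import torchvision.transforms.functional as TF

x = torch.tensor([[[0.0, 0.5, 1.0]]])         # toy single-channel tensor
print(TF.normalize(x, mean=[0.5], std=[0.5]))  # tensor([[[-1., 0., 1.]]])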

You can find the proper input format in the implementation of the pretrained model, but usually you will find normalization based on the ImageNet values.
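
To make the split concrete, here is a minimal sketch (my own; the model choice is just an example). The normalized tensor is what goes into the network; the unnormalized copy is only for plotting:

import torch
import torchvision.transforms as T
from torchvision import models

model = models.resnet18(pretrained=True).eval()

norm = T.Normalize(mean=[0.485, 0.456, 0.406],
                   std=[0.229, 0.224, 0.225])
invTrans = T.Normalize(mean=[-0.485 / 0.229, -0.456 / 0.224, -0.406 / 0.225],
                       std=[1 / 0.229, 1 / 0.224, 1 / 0.225])

img = torch.rand(3, 224, 224)              # stand-in for a real image in [0, 1]
img_norm = norm(img)                       # feed THIS to the network
with torch.no_grad():
    logits = model(img_norm.unsqueeze(0))  # add the batch dimension

img_vis = invTrans(img_norm)               # only for visualization, back to ~[0, 1]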


It works for me, thanks a lot!