Re-normalizing images

Hi!
How would you recommend undoing the normalization of ImageNet images, when:
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) is used, and
the goal is to get back a NumPy array in [0, 1].
Thanks a lot!

So Normalize computes x_norm = (x - mean) / std. If you want to undo that, you can do x = x_norm * std[None, :, None, None] + mean[None, :, None, None] (the indexing aligns the channel dimension). Or, if you prefer magic, reframe it (leaving out the indexing) as x = (x_norm + mean / std) * std, and then pass -mean/std and 1/std as (pre-computed) parameters to Normalize.
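A minimal sketch of the two equivalent inversions described above, assuming the ImageNet statistics from the question and a random stand-in for a batch of normalized images:

```python
import torch

# ImageNet statistics from the question above
mean = torch.tensor([0.485, 0.456, 0.406])
std = torch.tensor([0.229, 0.224, 0.225])

x_norm = torch.randn(2, 3, 8, 8)  # stand-in for normalized images (N, C, H, W)

# Direct inversion: x = x_norm * std + mean, broadcast over the channel dim
x = x_norm * std[None, :, None, None] + mean[None, :, None, None]

# Re-framed version: (x_norm + mean/std) * std gives the same result
x2 = (x_norm + (mean / std)[None, :, None, None]) * std[None, :, None, None]

assert torch.allclose(x, x2, atol=1e-6)

# For plotting as a NumPy array in [0, 1], clamp and move channels last
img = x[0].clamp(0, 1).permute(1, 2, 0).numpy()
```

The clamp is only needed for display; the inversion itself is exact up to floating-point precision.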

Best regards

Thomas

Thank you Tom!
How would you compute the statistics of the normalized images? We should do it per channel, right?

How does Normalize work on tensors?

Thank you!

Hi,

Yes, we compute the mean and std channel-wise.

As for computing these values, I used code gathered and edited from the community for an online method. In my case the data was about 20 GB and I could not load the whole dataset into memory, so I needed to compute the std and mean batch-wise and then accumulate them over all batches in an epoch.

Note that this approach is an approximation, but of the two implemented approaches the stronger one gives a more accurate answer at the cost of a longer run time.
If you can load the entire dataset into memory, you do not need any approximation and the approach would be different.
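The batch-wise accumulation idea can be sketched like this (Nik's actual code is not shown in the thread; this is one common variant that accumulates per-channel sums and sums of squares, then derives the mean and the biased std from E[x²] - E[x]²):

```python
import torch

def channel_stats(loader):
    """Accumulate per-channel mean/std over batches without loading
    the whole dataset into memory. The mean is exact; the std uses
    the accumulated sum of squares (biased estimator)."""
    n_pixels = 0
    channel_sum = torch.zeros(3)
    channel_sq_sum = torch.zeros(3)
    for batch in loader:              # each batch: (N, 3, H, W)
        n_pixels += batch.numel() // batch.shape[1]
        channel_sum += batch.sum(dim=[0, 2, 3])
        channel_sq_sum += (batch ** 2).sum(dim=[0, 2, 3])
    mean = channel_sum / n_pixels
    std = (channel_sq_sum / n_pixels - mean ** 2).sqrt()
    return mean, std

# Usage with any iterable of image batches, e.g. a DataLoader:
data = torch.rand(10, 3, 4, 4)
mean, std = channel_stats([data[:5], data[5:]])
```

Unlike naively averaging per-batch stds, this matches the full-dataset statistics up to floating-point error, since sums and sums of squares compose exactly across batches.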

Ref: About Normalization using pre-trained vgg16 networks

Bests
Nik

Thanks a lot!
The data is ImageNet; I just want to plot the images correctly, but no matter which method I try, the images are not in the wanted range.

Hm, for me:

import torch
from torchvision import transforms

n = transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
n_inv = transforms.Normalize([-0.485/0.229, -0.456/0.224, -0.406/0.225], [1/0.229, 1/0.224, 1/0.225])
a = torch.randn(3, 4, 5)  # works with real images, too
(n_inv(n(a)) - a).abs().max()

gives 3e-7ish, so it is inverting up to numerical precision.

Best regards

Thomas

I don’t know if this helps you, but remember that you can backpropagate through a transforms.Normalize. That is, if you are currently giving a normalized image to an optimizer as a parameter you can give the original, unnormalized image instead.

Thank you!
That’s interesting, can you elaborate? Maybe to share a short code?
Thanks a lot!