PIL image normalization

Hi everyone :slight_smile:

I am currently working on a CNN project, and after going through the PyTorch tutorial (https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#) I have some questions regarding normalization.

The tutorial mentions that since the images (once converted from PIL to tensors) are in the range [0, 1], it uses a mean and standard deviation of 0.5 to bring them into the range [-1, 1]. I know this is common practice (as is using the dataset's actual mean and standard deviation, or just keeping the values in [0, 1]). But: not using the actual mean and standard deviation will not result in an overall mean of 0 and variance of 1, as is often desired, right?
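To make the difference concrete, here is a small sketch (using NumPy as a stand-in for the tensor ops; the data is random and purely illustrative, not CIFAR-10):

```python
import numpy as np

# Hypothetical stand-in for a batch of images already scaled to [0, 1]
# (as torchvision's ToTensor would produce); values are random here.
rng = np.random.default_rng(0)
images = rng.beta(2, 5, size=(100, 3, 32, 32))  # skewed data, mean != 0.5

# Tutorial-style normalization: fixed mean=0.5, std=0.5.
# This maps [0, 1] linearly onto [-1, 1]...
fixed = (images - 0.5) / 0.5
assert fixed.min() >= -1.0 and fixed.max() <= 1.0

# ...but the result only has mean 0 and std 1 if the data happened to
# have mean 0.5 and std 0.5, which it generally does not.
print(fixed.mean(), fixed.std())

# Normalizing with the data's actual statistics does give mean 0, std 1.
actual = (images - images.mean()) / images.std()
print(actual.mean(), actual.std())
```

So both versions are linear rescalings, but only the second one actually standardizes the data.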

Also: why bring the images into the range [-1, 1] in the first place? Does anyone have a paper or book explaining this (I need a reference for my project :wink: )? All I can find are people saying that it's common practice…

Any help is very much appreciated!

All the best

Yes, but the proposed method might still work “good enough” for the tutorial. :wink:
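If you do want the actual statistics, a minimal sketch of computing per-channel mean/std over a dataset (using a random NumPy array as a placeholder for the real images, shaped (N, C, H, W)):

```python
import numpy as np

# Hypothetical array standing in for a full dataset of images in [0, 1].
rng = np.random.default_rng(1)
data = rng.beta(2, 5, size=(500, 3, 32, 32))

# Per-channel mean and std over all images and pixels -- these are the
# values you would pass to transforms.Normalize for the real dataset.
mean = data.mean(axis=(0, 2, 3))
std = data.std(axis=(0, 2, 3))

# Broadcasting the per-channel stats standardizes each channel.
normalized = (data - mean[None, :, None, None]) / std[None, :, None, None]
print(normalized.mean(axis=(0, 2, 3)))  # approximately [0, 0, 0]
print(normalized.std(axis=(0, 2, 3)))   # approximately [1, 1, 1]
```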

I think you should be able to find a proper explanation in Goodfellow et al., Deep Learning, and I'm sure Bishop, Pattern Recognition and Machine Learning covers it as well (both are worth a look in general).


@ptrblck Thank you so much! :slight_smile: