I am currently working on a CNN project and after looking at the PyTorch tutorial (https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#) I have some questions regarding normalization.
The tutorial mentions that since `ToTensor` converts the PIL images to tensors in the range [0, 1], they use a mean and standard deviation of 0.5 to bring them into the range [-1, 1]. I know that this is common practice (as is calculating the actual mean and standard deviation, or keeping the values in [0, 1]). But: using 0.5 instead of the actual mean and standard deviation will not result in an overall mean of 0 and variance of 1, as is often desired, right?
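To make my question concrete, here is a small sketch with synthetic data standing in for real images (the beta-distributed array is just a hypothetical stand-in whose mean is not 0.5). Normalizing with mean=std=0.5 only rescales [0, 1] to [-1, 1]; only the data's own statistics give mean 0 and std 1:

```python
import numpy as np

# Hypothetical stand-in for a batch of images in [0, 1], shape (N, C, H, W).
# A beta(2, 5) distribution is used so the true mean is NOT 0.5.
rng = np.random.default_rng(0)
imgs = rng.beta(2.0, 5.0, size=(64, 3, 32, 32)).astype(np.float32)

# Normalize with mean=0.5, std=0.5 (what the tutorial does):
# this is a pure linear rescale of [0, 1] onto [-1, 1] ...
rescaled = (imgs - 0.5) / 0.5
print(rescaled.min(), rescaled.max())   # within [-1, 1]

# ... but the result is only zero-mean / unit-variance if the data's
# true statistics happen to be 0.5, which they generally are not:
print(rescaled.mean(), rescaled.std())  # not 0 and 1 here

# Normalizing with the per-channel statistics of the data itself
# does give (approximately) mean 0 and std 1:
mean = imgs.mean(axis=(0, 2, 3))
std = imgs.std(axis=(0, 2, 3))
standardized = (imgs - mean[None, :, None, None]) / std[None, :, None, None]
print(standardized.mean(), standardized.std())
```

So as far as I can tell, both choices are "normalization", but only the second one standardizes the data in the statistical sense.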
Also: why bring the images into the range [-1, 1] in the first place? Does anyone have a paper or book explaining this (I need a reference for my project)? All I can find is people saying that it's common practice…
Any help is very much appreciated!
All the best