Not Normal Normalization?

I am having issues that are almost certainly due to values falling outside the range that is expected.

I computed the mean and std for the dataset and used them for training:

normalize = transforms.Normalize(mean=[0.90346843004226, 0.488738864660263],
                                 std=[0.10251837968826, 0.247761115431785])

However, based on a random sample taken, that doesn’t seem correct. For instance, a random sample taken from the 2nd channel has the value 0.3415. Applying (sample - mean) / std gives:

(0.3415 - 0.488738864660263) / 0.247761115431785 = -0.5942775338400

I was under the impression that the standard practice with PyTorch normalization is to use the range of 0.0 to 1.0. Could -0.5942775338400 be correct?

How can I check whether my mean and standard deviation are correct?

This is how I calculated the mean and standard deviation:

channel_mean = torch.zeros(2)
channel_std = torch.zeros(2)
nb_samples = 0
for i, (images, target) in enumerate(train_loader):
	for image in images:
		image = image.squeeze()
		# accumulate per-image, per-channel statistics
		channel_mean[0] += image[0, :, :].mean().item()
		channel_mean[1] += image[1, :, :].mean().item()
		channel_std[0] += image[0, :, :].std().item()
		channel_std[1] += image[1, :, :].std().item()
		nb_samples += 1
channel_mean /= nb_samples
channel_std /= nb_samples
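One way to sanity-check the statistics is to normalize the data with them and confirm that the result has mean ≈ 0 and std ≈ 1 per channel. Here is a minimal sketch with synthetic stand-in data (the shapes and values are assumptions, not your actual dataset); it pools all pixels of all images at once, rather than averaging per-image statistics:

```python
import torch

# Synthetic stand-in for a 2-channel dataset: 100 images of shape (2, 8, 8).
# (Your real data would come from train_loader instead.)
data = torch.rand(100, 2, 8, 8) * 0.2 + torch.tensor([0.9, 0.5]).view(1, 2, 1, 1)

# Per-channel mean/std computed over all pixels of all images at once.
mean = data.mean(dim=(0, 2, 3))
std = data.std(dim=(0, 2, 3))

# Normalize with the computed statistics and verify the result.
normalized = (data - mean.view(1, 2, 1, 1)) / std.view(1, 2, 1, 1)
print(normalized.mean(dim=(0, 2, 3)))  # each entry should be close to 0
print(normalized.std(dim=(0, 2, 3)))   # each entry should be close to 1
```

If the per-channel means and stds of the normalized data are not close to 0 and 1, the statistics (or the way they are applied) are off.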

Hi Jeshua!

Yes, computing (sample - mean) / std is exactly what Normalize does (and how it is supposed to work).

As for the expectation that normalized values lie in the range 0.0 to 1.0: not necessarily.

If your input values were approximately uniformly distributed over
the range 100 to 200, you might choose to transform them so that
they would lie in the range 0 to 1.

But if your values were approximately from a Gaussian distribution
with a mean of 150 and a standard deviation of 50, you might
choose to normalize them so they had a mean of 0 and a standard
deviation of 1.
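For example, standardizing samples drawn from roughly such a distribution (the values 150 and 50 are just the illustrative numbers above) yields values centered at 0 with unit spread, even though many of them fall well outside the range 0 to 1:

```python
import torch

torch.manual_seed(0)
# Roughly Gaussian samples with mean ~150 and standard deviation ~50.
x = torch.randn(10000) * 50.0 + 150.0

# Standardize: subtract the sample mean, divide by the sample std.
z = (x - x.mean()) / x.std()
print(z.mean())  # close to 0
print(z.std())   # close to 1
```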

And yes, -0.5942775338400 could well be correct. After normalization (to a mean of 0 and a standard deviation of 1), that sample value lies about 0.59 standard deviations below the mean. This is perfectly reasonable, and, indeed, to be expected.
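Plugging in the numbers from the question confirms this:

```python
# Statistics for the 2nd channel, taken from the question above.
mean = 0.488738864660263
std = 0.247761115431785

# The normalized value measures distance from the mean in units of std.
z = (0.3415 - mean) / std
print(z)  # about -0.594, i.e. 0.59 standard deviations below the mean
```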


K. Frank

Thanks for the explanation, Frank. With your vote of confidence I was able to get the network working. Thanks again!