I am having issues that are almost certainly caused by input values falling outside the expected range.
I computed the mean and std for the dataset and used them for training:
normalize = transforms.Normalize(mean=[0.90346843004226, 0.488738864660263], std=[0.10251837968826, 0.247761115431785])
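For reference, I apply it right after ToTensor(), roughly like this (everything apart from the Normalize call is a trimmed-down placeholder for my actual pipeline):

import torchvision.transforms as transforms

# ToTensor() scales pixel values to [0.0, 1.0]; Normalize then shifts/scales each channel
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.90346843004226, 0.488738864660263],
                         std=[0.10251837968826, 0.247761115431785]),
])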
However, spot-checking a random sample suggests something is off. For instance, a value of 0.3415 sampled from the 2nd channel normalizes to:
(sample - mean) / std
(0.3415 - 0.488738864660263) / 0.247761115431785 = -0.5942775338400
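As a sanity check on the arithmetic itself, the same numbers can be pushed through transforms.Normalize on a single 2-channel pixel (the 0.9 in the 1st channel is just a dummy value so the tensor has the right shape):

import torch
import torchvision.transforms as transforms

normalize = transforms.Normalize(mean=[0.90346843004226, 0.488738864660263],
                                 std=[0.10251837968826, 0.247761115431785])

# one 2-channel, 1x1 "image": channel 1 is a dummy, channel 2 is the sampled 0.3415
pixel = torch.tensor([[[0.9]], [[0.3415]]])
print(normalize(pixel)[1].item())  # prints approximately -0.5943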
I was under the impression that standard practice with PyTorch is for input values to fall in the range 0.0 to 1.0. Could a normalized value of -0.5942775338400 be correct?
How can I check whether my mean and standard deviation are correct?
This is how I calculated the mean and standard deviation:
channel_mean = torch.zeros(2)
channel_std = torch.zeros(2)
nb_samples = 0

for i, (images, target) in enumerate(train_loader):
    images = images.to(device)
    for image in images:
        image = image.squeeze()
        # accumulate the per-image mean and std of each of the 2 channels
        channel_mean[0] += torch.mean(image[0, :, :]).item()
        channel_mean[1] += torch.mean(image[1, :, :]).item()
        channel_std[0] += torch.std(image[0, :, :]).item()
        channel_std[1] += torch.std(image[1, :, :]).item()
        nb_samples += 1

channel_mean /= nb_samples
channel_std /= nb_samples
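For comparison, this is a sketch of computing the statistics over every pixel at once by accumulating per-channel sums and squared sums (same train_loader as above, assuming (N, 2, H, W) batches); I have not confirmed it matches the numbers I got:

import torch

channel_sum = torch.zeros(2)
channel_sq_sum = torch.zeros(2)
pixel_count = 0

for images, target in train_loader:
    images = images.float().cpu()
    # sum over batch, height, and width, keeping the 2 channels separate
    channel_sum += images.sum(dim=[0, 2, 3])
    channel_sq_sum += (images ** 2).sum(dim=[0, 2, 3])
    pixel_count += images.numel() // images.size(1)

mean = channel_sum / pixel_count
# population std; torch.std uses the unbiased estimator by default, so small differences are expected
std = (channel_sq_sum / pixel_count - mean ** 2).sqrt()
print(mean, std)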