Normalization in the MNIST example

Normalizing the input helps the SGD algorithm work better. If the feature scales are not approximately the same, it takes longer to find the minimum.
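
For context, here is a minimal sketch of what that normalization looks like in the MNIST example (0.1307 and 0.3081 are the commonly quoted mean and std of the MNIST training set; the root path is just a placeholder):

    from torchvision import datasets, transforms

    # ToTensor scales pixel values from [0, 255] to [0, 1];
    # Normalize then subtracts the mean and divides by the std,
    # so inputs end up roughly zero-centered with unit variance.
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,)),  # single-channel MNIST stats
    ])
    train_set = datasets.MNIST(root="./data", train=True, download=True, transform=transform)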

@jdhao, I wasn’t talking about the scaling, I was talking about the bias term.
Moreover, in the case of images all pixels are within the same range, so things like normalizing features with different units don’t apply here.

To put my question differently: after this “centering”, is the bias of the first-layer filter around 0?

Training is more stable and faster when parameters are small. In fact, none of these first-order optimization methods guarantees finding the minimum for an arbitrary network (they can’t even find it for simple ones). Therefore, although scaling and offsetting the input is equivalent to scaling the weights and offsetting the bias of the first linear layer, normalizing the input often gives better results in practice.
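
To spell that equivalence out: if the input is normalized as $x' = (x - \mu)/\sigma$, a first linear layer with weight $W$ and bias $b$ computes

$$
W x' + b \;=\; \frac{W}{\sigma}\,x + \Big(b - \frac{W\mu}{\sigma}\Big),
$$

so a network fed unnormalized inputs could in principle learn the rescaled weight $W/\sigma$ and shifted bias $b - W\mu/\sigma$ on its own; normalization simply starts the optimizer from a better-conditioned problem.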

Moreover, you shouldn’t normalize using every pixel’s mean and std. Since convolution operates per channel, you should use each channel’s mean and std instead.
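
For a 3-channel image that means passing one mean and one std per channel to Normalize; a sketch using the ImageNet statistics quoted later in this thread:

    from torchvision import transforms

    # Each channel is normalized independently with its own mean/std.
    normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                     std=[0.229, 0.224, 0.225])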


Do we need tensors to be in the range of [-1,1] or is [0,1] okay? I have my own dataset of RGB images with a range of [0,1]. I manually normalized the dataset but the tensors are still in the range of [0,1]. What is the benefit of transforming the range to [-1,1]?


@lkins, @smth
Why did you say [-1, 1]? From the documentation, I only see [0, 1]:
http://pytorch.org/docs/master/torchvision/transforms.html

class torchvision.transforms.ToTensor

Converts a PIL Image or numpy.ndarray (H x W x C) in the range [0, 255] to a torch.FloatTensor of shape (C x H x W) in the range [0.0, 1.0].

So if I normalize each channel myself, converting [a, b] to [0, 1], I don’t need transforms.ToTensor anymore, right?

But what if my data has a different range for each channel, e.g. x: -10 to 10, y: 1 to 100, z: 20 to 25 (they actually have some hidden correlation with each other)? How should I normalize then? It doesn’t make sense to normalize them all to the same range.

So can ImageNet’s parameters
mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]
also be used to normalize the CIFAR-10 dataset?

Can this normalization also be applied to a one-channel (grayscale) image?

I do not think so; for a grayscale image you can just use 0.5 for its mean and 0.5 for its std.
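
A sketch of that suggestion; with mean 0.5 and std 0.5, a single-channel input in [0, 1] is mapped to [-1, 1]:

    from torchvision import transforms

    # One-channel (grayscale) image: a 1-tuple for mean and std suffices.
    gray_normalize = transforms.Normalize((0.5,), (0.5,))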

Is it necessary to normalize the data? I’m just curious about two cases:

  1. If you don’t normalize the data
  2. If you don’t know the mean and std and just use 0.5 for all values.

Can you please add these explanations, perhaps as a footnote, to the tutorials? In their current form it is intimidating to see constants popping up without proper explanation. Great work BTW.


@smth Why should they be in [-1, 1] range? How does that help the network?

I get why the input has to be normalized, but if the values are between 0 and 1 isn’t that already considered normalized? Why -1 and 1?

I guess that depends on the activation function(s) used. If you are using sigmoid, you are better off with [0, 1] normalization; if you are using tanh (a.k.a. tan-sigmoid), then [-1, 1] normalization is the better fit. Normalization can, on many occasions, affect the time your network needs to converge, as the synaptic weights adapt to the situation over time.
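
To make the range mapping concrete: Normalize computes (x - mean) / std, so mean = 0.5, std = 0.5 maps [0, 1] onto [-1, 1], matching the output range of tanh. A quick check:

    import torch
    from torchvision import transforms

    x = torch.tensor([[[0.0, 0.5, 1.0]]])        # a tiny 1x1x3 "image" in [0, 1]
    y = transforms.Normalize((0.5,), (0.5,))(x)  # computes (x - 0.5) / 0.5
    print(y)                                     # tensor([[[-1.,  0.,  1.]]])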


To anybody looking for a more universal solution for custom datasets, this is what worked for me:

import numpy as np

# Note: data type must be numpy.ndarray
# example of data shape: (50000, 32, 32, 3); channel is the last dimension
data = training_set.data
# find the per-channel mean and std, then scale them into the range 0..1
mean = np.round(data.mean(axis=(0, 1, 2)) / 255, 4)
std = np.round(data.std(axis=(0, 1, 2)) / 255, 4)
print(f"mean: {mean}\nstd: {std}")

Thanks for the explanation

    import torchvision
    from torchvision import transforms

    train_transform = transforms.Compose([transforms.ToTensor()])
    train_set = torchvision.datasets.MNIST(root=data_dir, train=True, download=True, transform=train_transform)
    print("min:%f max:%f" % (train_set.data.min(), train_set.data.max()))  # prints 0, 255

As we know, transforms.ToTensor() should put the values in [0, 1], so why is the maximum in the result above 255?

I am very confused. Can anyone help me? Thanks in advance.

You are directly indexing the internal .data attribute, which contains the entire set of unprocessed samples.
If you want the transformations to be applied, you need to index or iterate the train_set, e.g. via train_set[0], which returns the transformed (image, target) tuple.
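
A small sketch of the difference, assuming the train_set from the snippet above:

    # Raw internal buffer: uint8 values in [0, 255], no transforms applied.
    print(train_set.data.min(), train_set.data.max())

    # Indexing goes through __getitem__, which applies the transforms and
    # returns an (image, target) tuple; the image is now a float in [0, 1].
    img, target = train_set[0]
    print(img.min(), img.max())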

Thank you so much for your explanation. It cleared up my confusion.

Is (0.1307,) the same as [0.1307] or [0.1307, 0.1307, 0.1307]? Thanks.