Understanding transform.Normalize()

Hi,
In my view, scaling and normalization are two different preprocessing steps.
Scaling maps the data into the range [0, 1].
Normalization then shifts and rescales the distribution, roughly (x - mean) / std per channel, so the data is easier to train on.

import numpy as np
import torch
import torchvision.transforms.functional as TF

image = torch.randint(0, 255, (3, 3, 3), dtype=torch.uint8)  # HWC uint8 image
scaled_image = TF.to_tensor(np.asarray(image))  # CHW float tensor in [0, 1]
output:
tensor([[[0.2078, 0.3765, 0.9451],
         [0.2039, 0.3961, 0.5176],
         [0.2588, 0.5333, 0.2039]],

        [[0.0941, 0.8980, 0.6745],
         [0.2431, 0.7451, 0.1255],
         [0.5412, 0.4667, 0.2471]],

        [[0.2000, 0.8588, 0.6902],
         [0.1137, 0.1255, 0.2000],
         [0.6863, 0.2392, 0.2118]]])
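
As a quick sanity check (my own sketch, not part of the original snippet), to_tensor should be equivalent to moving the channels first and dividing the uint8 values by 255:

# Sketch: to_tensor just rescales the uint8 values by 1/255
# and permutes HWC -> CHW; reuses the tensors defined above.
manual = image.permute(2, 0, 1).float() / 255.0
print(torch.allclose(scaled_image, manual))  # expected: True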
mean = [0.5, 0.5, 0.5]
std = [0.5, 0.5, 0.5]
normalized_image = TF.normalize(scaled_image, mean, std)
output:
tensor([[[-0.5843, -0.2471,  0.8902],
         [-0.5922, -0.2078,  0.0353],
         [-0.4824,  0.0667, -0.5922]],

        [[-0.8118,  0.7961,  0.3490],
         [-0.5137,  0.4902, -0.7490],
         [ 0.0824, -0.0667, -0.5059]],

        [[-0.6000,  0.7176,  0.3804],
         [-0.7725, -0.7490, -0.6000],
         [ 0.3725, -0.5216, -0.5765]]])
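
For completeness, here is a small sketch (my own, assuming the same mean = std = 0.5 per channel as above) showing that Normalize is just the element-wise (x - mean) / std, and how the two steps are usually composed in a transform pipeline:

# Sketch: Normalize applies (x - mean) / std per channel.
manual_norm = (scaled_image - 0.5) / 0.5
print(torch.allclose(normalized_image, manual_norm))  # expected: True

# ToTensor() scales to [0, 1], Normalize() then standardizes each channel.
from torchvision import transforms
pipeline = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])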

If I am wrong, please correct me.
Thanks in advance.
