# Does PyTorch automatically normalize images to (0, 1)?

I am new to PyTorch and was trying out some datasets. While using `torchvision.transforms.Normalize`, I noticed that most examples use 0.5 as both the mean and std to normalize images into the range (-1, 1). But that only works if the image data is already in (0, 1). When I normalized my data myself (with mean and std of 0.5), it came out in the range (-1, 1), which means the data must have been converted into (0, 1) somewhere in the loading code.
Am I right that PyTorch automatically normalizes images to (0, 1) when we load them, and if so, which line of code does this?

```python
import torch
from torchvision import transforms, datasets

# Compose the transforms: crop, convert to tensor, then normalize
Transformations = transforms.Compose([
    transforms.RandomResizedCrop(224),  # RandomSizedCrop is deprecated
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

image_data = datasets.ImageFolder('C:/users/sharm/OneDrive/Desktop/Mnist/dogImages/train',
                                  transform=Transformations)
loader = torch.utils.data.DataLoader(image_data, batch_size=32)
images, labels = next(iter(loader))
print(images)
```

Thank you


Hello,
PyTorch's default backend for images is Pillow, and when you use the `ToTensor()` transform, PyTorch automatically scales all images into `[0, 1]`.

Here is the source.

regards


I don’t think it automatically converts all images into [0, 1].
Have a look:

```python
import numpy as np
from torchvision import transforms

a = np.full((3, 3), 255)
print(a)
x = transforms.ToTensor()
y = x(a)
print(y)
```

```
[[255 255 255]
 [255 255 255]
 [255 255 255]]
tensor([[[255, 255, 255],
         [255, 255, 255],
         [255, 255, 255]]])
```

Hi,

The issue is that a numpy image is a byte/uint8 array, which is why there is a conversion to ByteTensor in the source code I referenced.
The way you initialized your array, it has `int64` dtype, which is not an image under numpy's or PIL's definitions.

To get the desired result, convert `a` using `a = a.astype(np.uint8)`.

Bests