I am new to PyTorch and was trying out some datasets. While using torchvision.transforms.Normalize, I noticed that most examples out there use 0.5 as both the mean and the std to normalize images into the range (-1, 1). But this only works if the image data is already in the range (0, 1). When I normalized my data myself (using mean and std of 0.5), it did end up in the range (-1, 1), which means that when I loaded the data it was converted into (0, 1) somewhere in the code.
Am I right that PyTorch automatically normalizes images to (0, 1) when loading them, and if so, which line of code does this?
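The range mapping described above can be checked with plain NumPy arithmetic. For uint8 input, torchvision.transforms.ToTensor divides by 255 (giving values in [0, 1]), and Normalize(mean=0.5, std=0.5) then applies (x - mean) / std. A minimal sketch, using a made-up 2x2 "image":

```python
import numpy as np

# A fake 2x2 grayscale "image" as uint8, the dtype PIL/torchvision expect.
img = np.array([[0, 64], [128, 255]], dtype=np.uint8)

# What ToTensor does for uint8 input: divide by 255.
as_tensor = img.astype(np.float32) / 255.0       # values now in [0, 1]

# What Normalize(mean=0.5, std=0.5) does: (x - mean) / std.
normalized = (as_tensor - 0.5) / 0.5             # values now in [-1, 1]

print(as_tensor.min(), as_tensor.max())    # 0.0 1.0
print(normalized.min(), normalized.max())  # -1.0 1.0
```

So the (0, 1) conversion happens inside ToTensor, not inside the dataset loader itself.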
The issue is that a numpy image is a byte/uint8 array, which is why there is a conversion to ByteTensor in the source code I referenced.
The way you initialized your array, it has int64 dtype, which is not an image by the definitions of numpy or PIL.
To get the desired result, convert it with a = a.astype(np.uint8).
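For example (the array values here are just illustrative):

```python
import numpy as np

# An array built from plain Python ints defaults to a platform integer
# dtype (typically int64), which PIL/torchvision do not treat as an image.
a = np.array([[0, 127], [128, 255]])

# Convert to uint8 so it is interpreted as an 8-bit image.
a = a.astype(np.uint8)
print(a.dtype)  # uint8
```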
You could replace Normalize with custom scaling that divides the data by its maximum value, or just use ToTensor, which already creates tensors in this range.
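The custom max-scaling alternative could look like this (a sketch; the function name and sample data are made up):

```python
import numpy as np

def scale_by_max(img):
    # Divide by the array's maximum so the result lies in [0, 1],
    # whatever the original range was.
    img = img.astype(np.float32)
    return img / img.max()

img = np.array([[0, 50], [100, 200]], dtype=np.uint8)
scaled = scale_by_max(img)
print(scaled.min(), scaled.max())  # 0.0 1.0
```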