Image normalization data range

Hi, I would like to normalize my image data to the range [-1, 1]. Here is the code I use for the transform.

from PIL import Image
from torchvision import transforms

# Resize with nearest-neighbor interpolation, scale to [0, 1], then normalize per channel.
compose_T1 = transforms.Compose([transforms.ToPILImage(),
                                 transforms.Resize((128, 128), interpolation=Image.NEAREST),
                                 transforms.ToTensor(),
                                 transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

But after the transform, I find my data is in the range [-1, 0.3412]. Checking the original data, I find its range is [0, 172]. In this case, should I preprocess with data / 172 * 255 in order to get the final range [-1, 1]?
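To see where the 0.3412 comes from, here is a minimal sanity check of the arithmetic (the random tensor is hypothetical, standing in for the actual data):

import torch

# Hypothetical uint8 image covering the question's value range [0, 172].
img = torch.randint(0, 173, (3, 64, 64), dtype=torch.uint8)

x = img.float() / 255.0                 # what ToTensor does: max becomes 172/255 ≈ 0.6745
y = (x - 0.5) / 0.5                     # what Normalize(0.5, 0.5) does per channel
print(y.min().item(), y.max().item())   # ≈ -1.0 and ≈ 0.349, close to the observed 0.3412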

It depends on how you read the image. Typically, torch dataset classes read the image and store it in the range [0, 1] (ToTensor does this scaling), so using mean 0.5 and std 0.5 maps the data to [-1, 1]. If the RGB image from your dataset is in the range [0, 255], you can either divide it by 255 to stay consistent with the typical torch approach, or normalize directly with mean 127.5 and std 127.5, which maps [0, 255] to [-1, 1] exactly.
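For illustration, a minimal sketch of both options (the random tensor is a hypothetical stand-in for a 0-255 RGB image):

import torch
from torchvision import transforms

img = torch.randint(0, 256, (3, 64, 64), dtype=torch.uint8)

# Option 1: scale to [0, 1] first (as ToTensor does), then Normalize(0.5, 0.5).
t1 = transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))(img.float() / 255.0)

# Option 2: keep the 0-255 range and normalize directly with 127.5.
t2 = transforms.Normalize((127.5,) * 3, (127.5,) * 3)(img.float())

print(t1.min().item(), t1.max().item())  # ≈ -1.0, 1.0
print(t2.min().item(), t2.max().item())  # ≈ -1.0, 1.0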

About using the specific value 172: you may want to consider a broader use case than just the dataset you currently have. If it's RGB, there could of course be test images with values above 172.

Hope it helps.