Image normalization for a pretrained network

Hi all,
I’m currently using a pretrained AlexNet for my experiments, and I need to preprocess the images by normalizing them. My images have values in the range [0, 255]. I divided by 255 and then normalized with normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]). However, after normalization the values are no longer in the range [0, 1]. Should I instead compute the mean and standard deviation of each image and then subtract the mean and divide by the standard deviation?
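For reference, here is roughly what my preprocessing pipeline looks like (a minimal sketch, assuming PIL images as input):

```python
import torchvision.transforms as transforms

# ToTensor converts a PIL image (or uint8 array) in [0, 255] to a float
# tensor in [0, 1]; Normalize then subtracts the given per-channel mean
# and divides by the per-channel std.
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```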

thanks for your help

This is normal. After normalization the values are not expected to stay in [0, 1], because Normalize subtracts the per-channel mean and divides by the per-channel std.

Subtracting the mean and dividing by the standard deviation is also exactly what torchvision.transforms.Normalize already does for you, just with fixed per-channel statistics rather than statistics computed per image.

For example, one popular approach is to use torchvision.transforms.ToTensor to convert your image from the range [0, 255] to a tensor in the range [0, 1], and then apply torchvision.transforms.Normalize with a mean and std of 0.5 for all channels. Since (0 - 0.5) / 0.5 = -1 and (1 - 0.5) / 0.5 = 1, this normalizes your images to the range [-1, 1].
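A minimal sketch of that approach (the variable names here are just for illustration):

```python
from PIL import Image
from torchvision import transforms

# Scale to [0, 1] with ToTensor, then shift/scale to [-1, 1] with mean = std = 0.5.
to_minus_one_one = transforms.Compose([
    transforms.ToTensor(),                      # [0, 255] -> [0, 1]
    transforms.Normalize(mean=[0.5, 0.5, 0.5],
                         std=[0.5, 0.5, 0.5]),  # [0, 1] -> [-1, 1]
])

# Quick check on an all-white dummy image: every value maps to (1 - 0.5) / 0.5 = 1.
img = Image.new("RGB", (32, 32), color=(255, 255, 255))
x = to_minus_one_one(img)
print(x.min().item(), x.max().item())  # 1.0 1.0
```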

Since you are using a pretrained model, you can also use the mean and std the training data was normalized with. As you are probably using AlexNet pretrained on ImageNet, those would be the per-channel mean and std of the ImageNet dataset, which are exactly the values in your code: mean = [0.485, 0.456, 0.406], std = [0.229, 0.224, 0.225].
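For a torchvision pretrained AlexNet, a typical pipeline would look like the sketch below; the Resize/CenterCrop sizes are the usual ImageNet evaluation settings, which is an assumption on my part rather than something from your post:

```python
from torchvision import models, transforms

# ImageNet statistics the torchvision pretrained weights were trained with.
imagenet_mean = [0.485, 0.456, 0.406]
imagenet_std = [0.229, 0.224, 0.225]

preprocess = transforms.Compose([
    transforms.Resize(256),        # assumed: resize the shorter side to 256
    transforms.CenterCrop(224),    # assumed: 224x224 crop expected by AlexNet
    transforms.ToTensor(),         # [0, 255] -> [0, 1]
    transforms.Normalize(mean=imagenet_mean, std=imagenet_std),
])

model = models.alexnet(pretrained=True)
model.eval()

# Usage: batch = preprocess(pil_image).unsqueeze(0); logits = model(batch)
```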
