Hi everyone,
I’m trying to train a segmentation network, and I would like to normalize the images between 0 and 1. However, the ToTensor transform I use outputs very low pixel values. Here’s a code sample:
from PIL import Image
import numpy as np
from torchvision import transforms

img_transforms = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.Resize((100, 100), interpolation=2),  # 2 = bilinear
    transforms.ToTensor(),
])

img = Image.open("myimg.png")
img = np.array(img).astype(float)
img *= 255.0 / img.max()  # rescale so the max pixel value is exactly 255
img = img.astype(np.uint8)
print(img.max())
img = Image.fromarray(img)
img = img_transforms(img)
print(img.max())
And the output:
255
tensor(0.1137)
I don’t understand why the max value isn’t 1, since ToTensor is supposed to output values between 0 and 1.
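For reference, ToTensor divides uint8 pixel values by 255, so a max pixel of 255 should come out as 1.0; working backwards, the observed max of 0.1137 corresponds to a pixel value of about 29 (0.1137 × 255 ≈ 29). A quick numpy-only sketch of that scaling rule (not torchvision itself):

```python
import numpy as np

# ToTensor's scaling rule for uint8 input: value / 255.
img = np.array([[0, 29, 255]], dtype=np.uint8)
scaled = img.astype(np.float32) / 255.0
print(scaled.max())  # 1.0 — what I expected from my pipeline

# Working backwards from the tensor max I actually observed:
print(0.1137 * 255)  # ≈ 29, the max pixel value ToTensor apparently saw
```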
Could anyone shed some light on what may be occurring?
Thanks