transforms.Resize() changes the value range of the resized PIL image

Hi, I've found that after I use transforms.Resize() the value range of the resized image changes.

import torch
from torchvision import transforms

# 500x500 tensor of random uint8 values
a = torch.randint(0, 255, (500, 500), dtype=torch.uint8)
print(a.size())
print(torch.max(a))
# add a channel dimension so ToPILImage treats it as a single-channel image
a = torch.unsqueeze(a, dim=0)
print(a.size())
compose = transforms.Compose([transforms.ToPILImage(), transforms.Resize((128, 128))])
a_trans = compose(a)
print(a_trans.size)
print(a_trans.getextrema())

The result:

torch.Size([500, 500])
tensor(254, dtype=torch.uint8)
torch.Size([1, 500, 500])
(128, 128)
(79, 179)

The original values lie in [0, 254] (torch.randint's upper bound is exclusive), but after transforms.Resize() the value range shrinks to [79, 179].

I want to resize without changing the value range. Could someone help? Thank you.


The problem is solved: the default interpolation for transforms.Resize() is BILINEAR, which computes each output pixel as a weighted average of its neighbors, so on noisy data the extremes get pulled toward the mean. Just set
transforms.Resize((128, 128), interpolation=Image.NEAREST)
and the value range won't change, since nearest-neighbor interpolation only copies existing pixel values.
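For reference, here is a minimal sketch of the fix, assuming PIL's Image module and torchvision's transforms are imported as below (on newer torchvision you would pass the InterpolationMode enum instead, as discussed further down):

import torch
from PIL import Image
from torchvision import transforms

a = torch.randint(0, 255, (1, 500, 500), dtype=torch.uint8)
# nearest-neighbor resizing only copies existing pixels, so the extrema survive
compose = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((128, 128), interpolation=Image.NEAREST),
])
print(compose(a).getextrema())  # min/max are drawn from the original values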


@Xiaoyu_Song,

did you get this warning?

UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.

It still trains; I'm not too sure what this warning means.

This warning points to this PR, which seems to have introduced the InterpolationMode argument, since torchvision.transforms now support PIL Images as well as tensors (at least the majority of transforms, if I'm not mistaken).

CC @vfdev-5 to correct me.
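On recent torchvision versions (roughly 0.9 and later, where InterpolationMode was introduced) you can silence the warning by passing the enum instead of the PIL constant. A minimal sketch:

from torchvision import transforms
from torchvision.transforms import InterpolationMode

# passing the enum instead of Image.NEAREST avoids the UserWarning
resize = transforms.Resize((128, 128), interpolation=InterpolationMode.NEAREST)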


Could you please see this: Regarding transforms.resize and drastic changes in accuracy. I have the same question about which interpolation is better, or the preferred way to resize an image. Thanks in advance, Sriram Na.