Trying to use the transform torchvision.transforms.Resize but getting an "image has wrong mode" error?


I am trying to use a sequence of transforms in a dataloader for images I am loading. The order is:
RandomRotation(degrees=(-30, 30), resample=False, expand=False)
RandomCrop(size=(246, 246), padding=None)
Resize(size=(128, 128), interpolation=PIL.Image.NEAREST)

The problem is that I want to use PIL.Image.BILINEAR for the resize. However, for everything except NEAREST, I get an error with the following stack trace:

File "/mnt/Analytics/InHouseProjects/Clustering/IIC_folder/cluster/", line 70, in __getitem__
image = self.transform(image)
File "/home/.local/share/virtualenvs/PyTorch-rjjGfeb_/lib/python3.7/site-packages/torchvision/transforms/", line 70, in __call__
img = t(img)
File "/home/.local/share/virtualenvs/PyTorch-rjjGfeb_/lib/python3.7/site-packages/torchvision/transforms/", line 207, in __call__
return F.resize(img, self.size, self.interpolation)
File "/home/.local/share/virtualenvs/PyTorch-rjjGfeb_/lib/python3.7/site-packages/torchvision/transforms/", line 254, in resize
return img.resize((ow, oh), interpolation)
File "/home/.local/share/virtualenvs/PyTorch-rjjGfeb_/lib/python3.7/site-packages/PIL/", line 1922, in resize
return self._new(self.im.resize(size, resample, box))
ValueError: image has wrong mode

So with torchvision.transforms.Resize I am unable to use any kind of interpolation except NEAREST. Even using Resize as the only transform throws this error. The images are 16-bit TIFF images that are cast to np.float32.
Thank you!

The error is raised by PIL, which doesn't seem to support the BILINEAR resampling filter for this image mode.
From their docs:

resample – An optional resampling filter. This can be one of PIL.Image.NEAREST, PIL.Image.BOX, PIL.Image.BILINEAR, PIL.Image.HAMMING, PIL.Image.BICUBIC or PIL.Image.LANCZOS. Default filter is PIL.Image.BICUBIC. If the image has mode “1” or “P”, it is always set to PIL.Image.NEAREST. See: Filters.
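A 16-bit TIFF cast to np.float32 becomes a mode "F" (32-bit float) PIL image, which is the kind of mode that older Pillow versions refuse to resample with anything but NEAREST. One way to sidestep PIL entirely is to do the bilinear resize on a tensor with `torch.nn.functional.interpolate` (a sketch; the 64×64 random array is a placeholder for your image data):

```python
import numpy as np
import torch
import torch.nn.functional as F
from PIL import Image

# A 16-bit image cast to float32, as described in the question;
# Image.fromarray turns a 2D float32 array into a mode "F" image.
arr = (np.random.rand(64, 64) * 65535).astype(np.float32)
mode = Image.fromarray(arr).mode  # "F": 32-bit floating point pixels

# Bilinear resize in torch instead of PIL; interpolate expects (N, C, H, W).
t = torch.from_numpy(arr)[None, None]
resized = F.interpolate(t, size=(128, 128), mode="bilinear", align_corners=False)
out = resized[0, 0].numpy()       # back to a (128, 128) float32 array
```

Upgrading Pillow may also help, since resampling support for additional modes has been added over time.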