Hello,
I am trying to use a sequence of transforms in a DataLoader for images I am loading. The order is:
RandomRotation(degrees=(-30, 30), resample=False, expand=False)
RandomCrop(size=(246, 246), padding=None)
RandomHorizontalFlip(p=0.5)
Resize(size=(128, 128), interpolation=PIL.Image.NEAREST)
The problem is that I want to use PIL.Image.BILINEAR for the resize. However, for every filter except NEAREST, I get an error with the following stack trace:
File "/mnt/Analytics/InHouseProjects/Clustering/IIC_folder/cluster/data.py", line 70, in __getitem__
  image = self.transform(image)
File "/home/.local/share/virtualenvs/PyTorch-rjjGfeb_/lib/python3.7/site-packages/torchvision/transforms/transforms.py", line 70, in __call__
  img = t(img)
File "/home/.local/share/virtualenvs/PyTorch-rjjGfeb_/lib/python3.7/site-packages/torchvision/transforms/transforms.py", line 207, in __call__
  return F.resize(img, self.size, self.interpolation)
File "/home/.local/share/virtualenvs/PyTorch-rjjGfeb_/lib/python3.7/site-packages/torchvision/transforms/functional.py", line 254, in resize
  return img.resize((ow, oh), interpolation)
File "/home/.local/share/virtualenvs/PyTorch-rjjGfeb_/lib/python3.7/site-packages/PIL/Image.py", line 1922, in resize
  return self._new(self.im.resize(size, resample, box))
ValueError: image has wrong mode
The error comes from torchvision.transforms.Resize, and I am unable to use any interpolation mode other than NEAREST. Even using Resize as the only transform throws this error. The images are 16-bit TIFF images that are cast to np.float32.
Thank you!