I'm trying to rotate an image using torchvision.transforms.functional.rotate and I get this error:
File "C:\Users\User\Documents\programming\proj\utils.py", line 55, in __call__
image = torchvision.transforms.functional.rotate(PIL_image, angle)
File "C:\Users\User\Anaconda3\envs\surv\lib\site-packages\torchvision\transforms\functional.py", line 729, in rotate
return img.rotate(angle, resample, expand, center, fillcolor=fill)
File "C:\Users\User\Anaconda3\envs\surv\lib\site-packages\PIL\Image.py", line 1915, in rotate
fillcolor=fillcolor)
File "C:\Users\User\Anaconda3\envs\surv\lib\site-packages\PIL\Image.py", line 2205, in transform
im = new(self.mode, size, fillcolor)
File "C:\Users\User\Anaconda3\envs\surv\lib\site-packages\PIL\Image.py", line 2375, in new
return Image()._new(core.fill(mode, size, color))
TypeError: must be real number, not tuple
If I call PIL_image.rotate(angle) directly (PIL's own rotation utility) it works, but for some reason PyTorch's torchvision.transforms.functional.rotate raises this error. How can I resolve it?
I believe this occurs when your image has mode = "F" (floating point) and therefore only a single channel. In that case PIL expects the fill color to be a single number, not a tuple. torchvision, however, sees a floating-point image and passes a one-element tuple downstream as the color, hence the TypeError from PIL. I'm not sure whether that's a bug or a feature.
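To illustrate, here is a minimal sketch of a workaround under that assumption: the image below is a synthetic single-channel mode-"F" stand-in, and PIL's own rotate is used directly with a scalar fill, which is what mode "F" expects.

```python
from PIL import Image

# Synthetic single-channel floating-point image (mode "F") standing in
# for the real input; with this mode PIL expects a scalar fill color.
img = Image.new("F", (64, 64), 0.0)

# Calling PIL's rotate directly with a scalar fillcolor works fine:
rotated = img.rotate(45, expand=True, fillcolor=0.0)
print(rotated.mode)  # "F" — the single-channel float mode is preserved
```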
I’m getting a similar issue as well! My model was working perfectly a few days ago, and I don’t think I’ve changed anything, but I’m seeing the following when I try to run the model:
/usr/lib/python3/dist-packages/PIL/ImageOps.py in expand(image, border, fill)
360 width = left + image.size[0] + right
361 height = top + image.size[1] + bottom
--> 362 out = Image.new(image.mode, (width, height), _color(fill, image.mode))
363 out.paste(image, (left, top))
364 return out
Could this be a new bug? I'm processing .tif images, and each image goes through the following transform:
def transform_function(degrees, scale, flip_prob):
    transform_list = []  # was referenced without being initialized
    transform_list.append(transforms.RandomAffine(degrees, scale=scale))
    transform_list.append(transforms.RandomHorizontalFlip(p=flip_prob))
    transform_list.append(transforms.Pad(37))  # all images should be 182x182 before padding
    transform_list.append(transforms.ToTensor())
    return transforms.Compose(transform_list)
Without altering any of my code, it's now working again! The torch and torchvision versions that were available as of October 22nd, 2020 (with the corresponding CUDA version) were causing my issue, which arose from the RandomAffine(degrees, scale=scale) operation (inputs below).
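When tracking down a version-dependent regression like this, it helps to record exactly which versions are installed. A small sketch that reports them from package metadata, without importing the libraries themselves (Python 3.8+):

```python
# Report installed torch/torchvision versions via package metadata.
from importlib.metadata import version, PackageNotFoundError

for pkg in ("torch", "torchvision"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "is not installed")
```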