Error when transforming an image during prediction

Hello. I deployed a trained model via Flask on Heroku. I noticed that my predict method keeps failing at the point of applying a transform to an input image.

This is the error message

RuntimeError: The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 0

This is the transform function my predict method calls:

from PIL import Image
from torchvision import transforms

def transform_image(infile):
    # Standard ImageNet preprocessing: resize, center-crop, normalize
    my_transforms = transforms.Compose([transforms.Resize(256),
                                        transforms.CenterCrop(224),
                                        transforms.ToTensor(),
                                        transforms.Normalize(
                                            [0.485, 0.456, 0.406],
                                            [0.229, 0.224, 0.225])])
    image = Image.open(infile)
    timg = my_transforms(image)
    timg.unsqueeze_(0)  # add a batch dimension
    return timg

It fails at

timg = my_transforms(image)

The stack trace

File "/app/application.py", line 30, in transform_image
    timg = my_transforms(image)
File "/app/.heroku/python/lib/python3.7/site-packages/torchvision/transforms/transforms.py", line 61, in __call__
    img = t(img)
File "/app/.heroku/python/lib/python3.7/site-packages/torchvision/transforms/transforms.py", line 166, in __call__
    return F.normalize(tensor, self.mean, self.std, self.inplace)
File "/app/.heroku/python/lib/python3.7/site-packages/torchvision/transforms/functional.py", line 208, in normalize
    tensor.sub_(mean).div_(std)
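
The error can be reproduced outside the app with just the Normalize step (a minimal sketch, not taken from the deployed code; the 4-channel tensor stands in for an RGBA image):

import torch
from torchvision import transforms

normalize = transforms.Normalize([0.485, 0.456, 0.406],
                                 [0.229, 0.224, 0.225])

normalize(torch.rand(3, 224, 224))  # 3 channels: works
normalize(torch.rand(4, 224, 224))  # 4 channels (e.g. RGBA):
# RuntimeError: The size of tensor a (4) must match the size of
# tensor b (3) at non-singleton dimension 0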

This error might be raised if your input tensor has 4 channels, while the Normalize transform expects a 3-channel image. This can happen if the input image contains an alpha channel.
After loading the PIL.Image, you could convert it to RGB via image = image.convert('RGB').
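
Applied to the function above, that's a one-line change:

    image = Image.open(infile).convert('RGB')  # drop any alpha channel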


@ptrblck Thank you, that solved it. I am a newbie to PyTorch. Why would that happen? I used an image similar to the ones I trained the model with.

I also used the same transform when loading my dataset during training.

This shouldn't happen if you are using the same images that were working before.
However, if you switched e.g. from the training dataset to the validation dataset (or any other dataset), some images might have been stored with an alpha channel, which is usually not needed.
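
A quick way to check a dataset for such images (a sketch; 'data/' and the '*.png' pattern are placeholders for your actual dataset layout):

from pathlib import Path
from PIL import Image

for path in Path('data/').rglob('*.png'):
    with Image.open(path) as img:
        if img.mode != 'RGB':
            print(path, img.mode)  # e.g. RGBA, LA, or P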