RuntimeError: stack expects each tensor to be equal size, but got [3, 224, 224] at entry 0 and [3, 224, 336] at entry 3

Okay, but isn't that odd? I already included transforms.Resize() in transforms.Compose(), as seen here:

# define transforms
if augment:
    train_transform = transforms.Compose([
        transforms.Resize(img_size),
        transforms.RandomHorizontalFlip(0.3),
        transforms.ToTensor(),
        normalize,
    ])
else:
    train_transform = transforms.Compose([
        transforms.Resize(img_size),
        transforms.ToTensor(),
        normalize,
    ])

# load the dataset
train_dataset = datasets.ImageFolder(
    root=train_dir,
    transform=transforms.Compose([
        transforms.Resize(img_size),
        transforms.RandomHorizontalFlip(0.3),
        transforms.ToTensor(),
    ])
)

valid_dataset = datasets.ImageFolder(
    root=train_dir,
    transform=transforms.Compose([
        transforms.Resize(img_size),
        transforms.ToTensor(),
        normalize,
    ])
)

I even checked the image size, as I defined batch_size to be 64 and img_size to be 224:

IN: trainimages, trainlabels = next(iter(train_loader))
IN: trainimages.shape
OUT: torch.Size([64, 3, 224, 224])

So you mean to say that even if transforms.Resize() is included in transforms.Compose(), there can still be a size-mismatch error?

Yup! That's why I asked if I should include another transform right before passing the images to the model.

Hey, I am getting the same error as @Flint even after applying transforms.Resize(). Did you solve it?

Unfortunately no. Have you thought of a solution yet?

Note that Resize will behave differently on input images with a different height and width.
From the docs:

size (sequence or int) – Desired output size. If size is a sequence like (h, w), output size will be matched to this. If size is an int, smaller edge of the image will be matched to this number, i.e., if height > width, then image will be rescaled to (size * height / width, size)

If you are dealing with such images, pass the size argument as a tuple:

transforms.Resize((img_size, img_size))
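
To make the difference concrete, here is a minimal sketch (the image size is made up for illustration):

from PIL import Image
from torchvision import transforms

img = Image.new("RGB", (336, 224))  # hypothetical wide image: width=336, height=224

# int: only the smaller edge is matched to 224, the aspect ratio is kept
print(transforms.Resize(224)(img).size)         # (336, 224)

# tuple: the output is forced to exactly 224x224
print(transforms.Resize((224, 224))(img).size)  # (224, 224)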

CC @pr6dA

Traceback (most recent call last):
  File "main/train.py", line 40, in <module>
    trainer.train(epoch)
  File "/home/redarknight/projects/p2s/main/../lib/core/base.py", line 151, in train
    for i, (img_joint, gt_mesh, gt_h36m_joint, gt_coco_joint, part_seg) in enumerate(batch_generator):
  File "/home/redarknight/anaconda3/envs/pytorch/lib/python3.7/site-packages/tqdm/std.py", line 1097, in __iter__
    for obj in iterable:
  File "/home/redarknight/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
    data = self._next_data()
  File "/home/redarknight/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 838, in _next_data
    return self._process_data(data)
  File "/home/redarknight/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 881, in _process_data
    data.reraise()
  File "/home/redarknight/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/_utils.py", line 395, in reraise
    raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in DataLoader worker process 11.
Original Traceback (most recent call last):
  File "/home/redarknight/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/redarknight/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
    return self.collate_fn(data)
  File "/home/redarknight/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 79, in default_collate
    return [default_collate(samples) for samples in transposed]
  File "/home/redarknight/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 79, in <listcomp>
    return [default_collate(samples) for samples in transposed]
  File "/home/redarknight/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 64, in default_collate
    return default_collate([torch.as_tensor(b) for b in batch])
  File "/home/redarknight/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 55, in default_collate
    return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [16, 192, 144] at entry 0 and [14, 192, 144] at entry 45

I'm having a similar error now, and it has nothing to do with Resize.
I changed the input returned in __getitem__() from a numpy array with shape [14, 192, 144] to an array with shape [16, 192, 144].
Strangely, the DataLoader now raises the above error.

How did you change this shape? Are you creating these arrays inside the __getitem__ method or are you indexing/slicing them? In the latter case, could the “last” slice be smaller?
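
For example (a toy sketch with made-up numbers, not your actual dataset code), chunked slicing can silently produce a smaller last chunk:

import numpy as np

data = np.zeros((46, 192, 144))  # hypothetical stack of 46 maps
chunk = 16

for idx in range(3):
    # the first two slices have shape (16, 192, 144), the last one only (14, 192, 144)
    print(data[idx * chunk:(idx + 1) * chunk].shape)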

Is there a way to have a batch with tensors of different sizes in it, like entry 0: [4, 475, 320] and entry 1: [4, 256, 256]?
I think a fully convolutional network like UNet can handle inputs of different shapes, so I thought it would be a good idea to feed differently shaped inputs during training.

You would have to pad or resize the tensors to create a single batch of tensors. There is an ongoing effort to implement nested tensors, which would support variable-shaped tensors, but I'm unsure what state it is in at the moment.
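
A minimal padding sketch (pad_to_same_size is a hypothetical helper, not a PyTorch function): zero-pad every sample to the largest height and width in the batch, then stack:

import torch
import torch.nn.functional as F

def pad_to_same_size(tensors):
    # pad each [C, H, W] tensor on the right/bottom to the largest H and W, then stack
    max_h = max(t.shape[1] for t in tensors)
    max_w = max(t.shape[2] for t in tensors)
    padded = [F.pad(t, (0, max_w - t.shape[2], 0, max_h - t.shape[1])) for t in tensors]
    return torch.stack(padded)

batch = pad_to_same_size([torch.randn(4, 475, 320), torch.randn(4, 256, 256)])
print(batch.shape)  # torch.Size([2, 4, 475, 320])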

Great, thank you very much!

Hi @ptrblck, I'm getting the error below. I read the file and resize it using cv2:
RuntimeError: stack expects each tensor to be equal size, but got [256, 256, 3] at entry 0 and [256, 256] at entry 1

Based on the error message, it seems that the second image tensor is a grayscale image (single channel), while the first one contains 3 channels.
You could transform both images to either grayscale or RGB to create matching shapes.
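
Since you mentioned cv2, a hedged sketch of the RGB route could look like this (file_path is a placeholder for your image path):

import cv2

img = cv2.imread(file_path, cv2.IMREAD_UNCHANGED)  # file_path: placeholder path
if img.ndim == 2:  # grayscale image with shape [H, W]
    img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)    # now [H, W, 3]
img = cv2.resize(img, (256, 256))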

Thank you so much sir

Hi @ptrblck, I am working on a CNN to generate high-resolution images from low-resolution images. I have tons of images of different sizes. Can I make a batch with differently sized tensors? I ask because I am getting the same error.

You can create a “batch” of tensors with different shapes by using e.g. a list (and a custom collate_fn in the DataLoader). However, you won’t be able to pass this list of tensors to the model directly and would either have to pass them one by one or create a single tensor after cropping/padding the tensors.
I don’t know how far the implementation of nested tensors is, but this utility would allow you to use a tensor object containing differently shaped tensors internally.
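
A minimal sketch of the list approach (the toy dataset and list_collate are illustrative, not part of any library):

import torch
from torch.utils.data import DataLoader, Dataset

class VariableSizeDataset(Dataset):
    # toy dataset that returns differently sized images, for illustration only
    def __len__(self):
        return 8

    def __getitem__(self, idx):
        h = 256 + 32 * (idx % 3)
        return torch.randn(3, h, 256), idx

def list_collate(batch):
    # keep the variable-sized samples in plain Python lists instead of stacking them
    images = [sample[0] for sample in batch]
    targets = [sample[1] for sample in batch]
    return images, targets

loader = DataLoader(VariableSizeDataset(), batch_size=4, collate_fn=list_collate)
images, targets = next(iter(loader))
print([img.shape for img in images])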

Thank you so much! I was stuck at this error for so long.

Hi everyone, I'm new to PyTorch and I have the same problem because I have rectangular images of different sizes.
So if I understood correctly, the DataLoader does not natively support images of different shapes, is that right?
If so, is there a workaround to handle such cases?

Thanks in advance

Yes, the default collate_fn used in the DataLoader tries to torch.stack the inputs and will fail if the samples have different shapes. A fix would be to write a custom collate_fn and return the samples in e.g. a list. Note that while this would fix the creation of the batch in the DataLoader, your model would most likely not be able to use the list as an input, and you would then need to pass each sample separately, resize it, etc.
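
As a hedged sketch of that last point, each list entry can be forwarded on its own with a batch dimension of 1 (the tiny model here is just a stand-in for your network):

import torch
import torch.nn as nn

model = nn.Conv2d(3, 8, kernel_size=3, padding=1)  # stand-in for a fully convolutional model

# `images` is what a list-returning collate_fn would hand you for one batch
images = [torch.randn(3, 256, 256), torch.randn(3, 320, 288)]
outputs = [model(img.unsqueeze(0)) for img in images]  # forward each sample with batch size 1
print([out.shape for out in outputs])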

Thanks for this precise response. This worked!