RuntimeError: stack expects each tensor to be equal size, but got [3, 1500, 2000] at entry 0 and [3, 1728, 2304] at entry 1

I am using a custom dataset to train UNet, but I am getting an error.
Repo: https://github.com/milesial/Pytorch-UNet
Error:

Traceback (most recent call last):
  File "/home/khawar/Pytorch-UNet/train.py", line 186, in <module>
    val_percent=args.val / 100)
  File "/home/khawar/Pytorch-UNet/train.py", line 70, in train_net
    for batch in train_loader:
  File "/home/khawar/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 363, in __next__
    data = self._next_data()
  File "/home/khawar/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 989, in _next_data
    return self._process_data(data)
  File "/home/khawar/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1014, in _process_data
    data.reraise()
  File "/home/khawar/.local/lib/python3.6/site-packages/torch/_utils.py", line 395, in reraise
    raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/khawar/.local/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 185, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/khawar/.local/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
    return self.collate_fn(data)
  File "/home/khawar/.local/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 74, in default_collate
    return {key: default_collate([d[key] for d in batch]) for key in elem}
  File "/home/khawar/.local/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 74, in <dictcomp>
    return {key: default_collate([d[key] for d in batch]) for key in elem}
  File "/home/khawar/.local/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 55, in default_collate
    return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [3, 900, 1600] at entry 0 and [3, 1500, 2000] at entry 1

The error is telling you that you are trying to put images of different sizes (size 1: [3, 900, 1600], size 2: [3, 1500, 2000]) into one batch. This is not possible, though: all images in a batch need to be the same size.
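You can reproduce this in isolation. The following sketch is roughly what default_collate ends up doing with your samples (the shapes are taken from your traceback):

    import torch

    # Two image tensors with the shapes reported in the traceback
    a = torch.zeros(3, 900, 1600)
    b = torch.zeros(3, 1500, 2000)

    # default_collate boils down to this stack, which raises:
    # RuntimeError: stack expects each tensor to be equal size, ...
    batch = torch.stack([a, b], 0)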

But my images are the same size. How can I give the images the same size programmatically?

I don’t know if you noticed, but you asked about the exact same error message in a post two days ago.

RuntimeError: stack expects each tensor to be equal size, but got [3, 288, 352] at entry 0 and [3, 256, 256] at entry 1

And you have already gotten an answer to it.

This answer still applies here. The only difference is that you are now using a custom dataset.

If you want a specific answer for your custom dataset:
In utils/dataset.py you resize images by a constant scaling factor instead of to a fixed size:

    @classmethod
    def preprocess(cls, pil_img, scale):
        w, h = pil_img.size
        # The target size depends on each image's original size, so
        # differently sized inputs remain differently sized after resizing.
        newW, newH = int(scale * w), int(scale * h)
        assert newW > 0 and newH > 0, 'Scale is too small'
        pil_img = pil_img.resize((newW, newH))

I am guessing that the images you are using are not all the same size. If you preprocess them by simply scaling each one with a constant factor, they will stay differently sized.
You either need images that are all the same size for this scaling to work, or you resize every image to a fixed size, which is also what was suggested in the previous post you made about this topic.
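A minimal sketch of the second option, assuming you adapt it into preprocess in utils/dataset.py (the helper name and the target size here are only examples, not part of the repo):

    from PIL import Image

    def resize_to_fixed_size(pil_img: Image.Image, target_size=(1600, 900)):
        # target_size is (width, height) and is an arbitrary example value.
        # Resizing every image to one fixed size makes all tensors in a
        # batch the same shape, so default_collate can stack them.
        return pil_img.resize(target_size)

With that, every image becomes a [3, 900, 1600] tensor and torch.stack no longer fails.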
