Invalid argument: tensor sizes for DataLoader (PyTorch)

I have made a dataset using the PyTorch DataLoader and ImageFolder. My dataset class wraps two ImageFolder datasets, which are paired (original image and ground-truth image). I want to feed these to a PyTorch neural network. Dataset class:

class bsds_dataset(Dataset):
    def __init__(self, ds_main, ds_energy):
        self.dataset1 = ds_main
        self.dataset2 = ds_energy

    def __getitem__(self, index):
        x1 = self.dataset1[index]
        x2 = self.dataset2[index]
        return x1, x2

    def __len__(self):
        return len(self.dataset1)

And I’m loading the images with ImageFolder:

original_imagefolder = './images/whole'
target_imagefolder = './results/whole'

original_ds = ImageFolder(original_imagefolder, transform=transforms.ToTensor())
energy_ds = ImageFolder(target_imagefolder, transform=transforms.ToTensor())

dataset = bsds_dataset(original_ds, energy_ds)
loader = DataLoader(dataset, batch_size=16)

Then I tried to iterate in batches:

for i, (x, y) in enumerate(loader):
    print(x)

This error happened:

RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 321 and 481 in dimension 2 at ..\aten\src\TH/generic/THTensor.cpp:711

The dataset is BSDS500:
https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/resources.html

All images in the dataset are 481x321 or 321x481 pixels. I think some transform is needed, but I don’t want to distort the images by stretching them.
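For context, one non-stretching option would be to rotate the portrait images by 90 degrees so every image shares the single landscape size (this only works because BSDS500 images come in exactly those two sizes; the helper below is just a sketch, not code I have in my project):

```python
def needs_rotation(width, height, target=(481, 321)):
    # BSDS500 images are either 481x321 (landscape) or 321x481 (portrait).
    # Rotating the portrait ones by 90 degrees makes every image 481x321,
    # so default_collate can stack them without any stretching.
    return (width, height) != target and (height, width) == target
```

One would then apply this check inside a custom transform (e.g. with `torchvision.transforms.functional.rotate`) before `ToTensor`.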

Full traceback:

   C:\Anaconda3\envs\torchgpu\lib\site-packages\ipykernel_launcher.py:77: UserWarning: nn.init.xavier_normal is now deprecated in favor of nn.init.xavier_normal_.
C:\Anaconda3\envs\torchgpu\lib\site-packages\ipykernel_launcher.py:78: UserWarning: nn.init.constant is now deprecated in favor of nn.init.constant_.
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-42-4c4ba0a13c32> in <module>
      5 optimizer = optim.SGD(model.parameters(), lr=0.001)
      6 for epoch in range(epochs):
----> 7     for i, batch in enumerate(loader):
      8         print(batch)

C:\Anaconda3\envs\torchgpu\lib\site-packages\torch\utils\data\dataloader.py in __next__(self)
    558         if self.num_workers == 0:  # same-process loading
    559             indices = next(self.sample_iter)  # may raise StopIteration
--> 560             batch = self.collate_fn([self.dataset[i] for i in indices])
    561             if self.pin_memory:
    562                 batch = _utils.pin_memory.pin_memory_batch(batch)

C:\Anaconda3\envs\torchgpu\lib\site-packages\torch\utils\data\_utils\collate.py in default_collate(batch)
     66     elif isinstance(batch[0], container_abcs.Sequence):
     67         transposed = zip(*batch)
---> 68         return [default_collate(samples) for samples in transposed]
     69 
     70     raise TypeError((error_msg_fmt.format(type(batch[0]))))

C:\Anaconda3\envs\torchgpu\lib\site-packages\torch\utils\data\_utils\collate.py in <listcomp>(.0)
     66     elif isinstance(batch[0], container_abcs.Sequence):
     67         transposed = zip(*batch)
---> 68         return [default_collate(samples) for samples in transposed]
     69 
     70     raise TypeError((error_msg_fmt.format(type(batch[0]))))

C:\Anaconda3\envs\torchgpu\lib\site-packages\torch\utils\data\_utils\collate.py in default_collate(batch)
     66     elif isinstance(batch[0], container_abcs.Sequence):
     67         transposed = zip(*batch)
---> 68         return [default_collate(samples) for samples in transposed]
     69 
     70     raise TypeError((error_msg_fmt.format(type(batch[0]))))

C:\Anaconda3\envs\torchgpu\lib\site-packages\torch\utils\data\_utils\collate.py in <listcomp>(.0)
     66     elif isinstance(batch[0], container_abcs.Sequence):
     67         transposed = zip(*batch)
---> 68         return [default_collate(samples) for samples in transposed]
     69 
     70     raise TypeError((error_msg_fmt.format(type(batch[0]))))

C:\Anaconda3\envs\torchgpu\lib\site-packages\torch\utils\data\_utils\collate.py in default_collate(batch)
     41             storage = batch[0].storage()._new_shared(numel)
     42             out = batch[0].new(storage)
---> 43         return torch.stack(batch, 0, out=out)
     44     elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \
     45             and elem_type.__name__ != 'string_':

RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 321 and 481 in dimension 2 at ..\aten\src\TH/generic/THTensor.cpp:711

Hi @Arta_A,

You can’t directly create a batch from samples of different sizes, because the DataLoader tries to collate/glue them with torch.stack, which requires that all tensors be of the same size.

The workaround is to provide a custom collation function (through the DataLoader's collate_fn argument) that builds the batch as, for example, a Python list (or anything you want) rather than a stacked tensor.

Please find a detailed example in this thread: How to create a dataloader with variable-size input - #3 by jdhao.
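As a minimal sketch of that workaround (names are illustrative; `dataset` is the bsds_dataset instance from the question):

```python
def list_collate(batch):
    # batch is a list of samples as returned by __getitem__, here
    # (x1, x2) pairs. Returning the list unchanged means the DataLoader
    # never calls torch.stack, so mismatched image sizes are fine.
    return batch

# Usage (assuming the dataset and DataLoader from the question):
# loader = DataLoader(dataset, batch_size=16, collate_fn=list_collate)
# for batch in loader:
#     for x1, x2 in batch:
#         ...  # process each variable-size pair individually
```

The trade-off is that the model then has to consume samples one by one (or pad/crop them itself), since the batch is no longer a single tensor.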

I have a similar problem: UserWarning: The number of elements in the out tensor of shape [1] is 1

Do you have any suggestions for me?