RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 3 and 2 in dimension 1

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/hu/.local/lib/python3.6/site-packages/visdom/__init__.py", line 446, in _send
    data=json.dumps(msg),
  File "/home/hu/.local/lib/python3.6/site-packages/requests/api.py", line 112, in post
    return request('post', url, data=data, json=json, **kwargs)
  File "/home/hu/.local/lib/python3.6/site-packages/requests/api.py", line 58, in request
    return session.request(method=method, url=url, **kwargs)
  File "/home/hu/.local/lib/python3.6/site-packages/requests/sessions.py", line 512, in request
    resp = self.send(prep, **send_kwargs)
  File "/home/hu/.local/lib/python3.6/site-packages/requests/sessions.py", line 622, in send
    r = adapter.send(request, **kwargs)
  File "/home/hu/.local/lib/python3.6/site-packages/requests/adapters.py", line 513, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /events (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f92289e3358>: Failed to establish a new connection: [Errno 111] Connection refused',))
<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=192x128 at 0x7F92289A4518> torch.Size([3, 224, 224])
<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=128x192 at 0x7F92289A44A8> torch.Size([3, 224, 224])
<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=192x128 at 0x7F92289A4588> torch.Size([3, 224, 224])
<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=192x128 at 0x7F92289A4588> torch.Size([3, 224, 224])
<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=192x128 at 0x7F92289A4518> torch.Size([3, 224, 224])
<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=128x192 at 0x7F92289A4518> torch.Size([3, 224, 224])
(… the same line repeats for every remaining sample in the loader: the source PIL images are either 192x128 or 128x192, and every transformed tensor is torch.Size([3, 224, 224]) …)
Traceback (most recent call last):
  File "/home/hu/下载/Corel5k (3).py", line 217, in <module>
    for i, (input, target) in  enumerate(testloader):
  File "/home/hu/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 286, in __next__
    return self._process_next_batch(batch)
  File "/home/hu/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 307, in _process_next_batch
    raise batch.exc_type(batch.exc_msg)
RuntimeError: Traceback (most recent call last):
  File "/home/hu/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 57, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/home/hu/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 138, in default_collate
    return [default_collate(samples) for samples in transposed]
  File "/home/hu/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 138, in <listcomp>
    return [default_collate(samples) for samples in transposed]
  File "/home/hu/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 115, in default_collate
    return torch.stack(batch, 0, out=out)
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 4 and 3 in dimension 1 at /pytorch/aten/src/TH/generic/THTensorMath.c:3586

Here is the printed result.

I couldn’t figure out what exactly is wrong so far.

Add this in your __getitem__() method. It might solve the issue.

image = Image.open(image).convert('RGB')
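
For reference, here is a minimal sketch of a dataset whose __getitem__ does this (the class and attribute names are just placeholders, not taken from your code):

from PIL import Image
import torch.utils.data as data

class ImageListDataset(data.Dataset):
    # Hypothetical minimal dataset: 'samples' is a list of (path, label) pairs,
    # 'transform' is e.g. a torchvision transform pipeline.
    def __init__(self, samples, transform=None):
        self.samples = samples
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, index):
        path, target = self.samples[index]
        image = Image.open(path).convert('RGB')  # force 3 channels (grayscale/RGBA -> RGB)
        if self.transform is not None:
            image = self.transform(image)
        return image, target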

I will let you know if I get any pointers.

3 Likes

Ok, anyway, thanks a lot for your answer!

PS: After adding image = Image.open(image).convert('RGB'), the error message didn’t change.

Is there a mistake in your code?
You are using the trainset in your testloader; is that intended?

No, it’s my partner’s mistake. Thanks to your message, I’ve fixed it just now.

After some discussion, I think the reason is:

Normally, for a single-label image, the dataset returns the label (type: list), and the DataLoader turns it into a tensor.

But for a multi-label image, the DataLoader can’t turn it into a tensor, which caused a bug we met before: type ‘list’ doesn’t have the attribute ‘to’ (triggered by the line “target.to(device)”).

To fix that problem, we use torch.from_numpy to turn the labels into a tensor inside the dataset:

# (fragment from the dataset's __init__)
labelList = list(map(int, words))        # parse the label tokens as ints
labelList = np.array(labelList)
labelList = torch.from_numpy(labelList)  # 1-D label tensor, length = number of labels
imgs.append((imageList, labelList))
self.imgs = imgs

And here is the problem: the number of labels per image is not fixed (some pictures have 3 labels, some 4, some 2), so the length of labelList is not fixed either. When it is turned into a tensor, dimension 0 varies (sometimes 3, sometimes 2), and that is what causes the error.
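
For example (a quick check with made-up label lists, just to reproduce the failure in isolation):

import numpy as np
import torch

a = torch.from_numpy(np.array([1, 5, 7]))  # an image with 3 labels
b = torch.from_numpy(np.array([2, 3]))     # an image with 2 labels
# default_collate ends up doing this, which fails because the lengths differ:
torch.stack([a, b], 0)  # RuntimeError: Sizes of tensors must match except in dimension 0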

Is my understanding right? If so, how can I solve the problem caused by the number of labels not being fixed?
I appreciate any reply! :slight_smile:

I see. You can use a custom collate function to get around this. Similar to:

Thanks for the reply; however, the quote doesn’t seem to deal with the problem we are facing.

We want to handle the case where the number of labels is not fixed, but the quote takes care of the variable size of the images.

I tried to imitate it and write my own “my_collate”, but it didn’t work.

Yes, it is not dealing with your exact issue; I was hoping it might give you some ideas.
Please try out the following collate function:

import torch
import torch.utils.data as data

def my_collate(batch):
    # The images are all the same size, so they can be stacked into one batch tensor.
    data = torch.stack([item[0] for item in batch], 0)
    # The label lists have different lengths, so keep them as a list of
    # per-sample tensors instead of trying to stack them.
    target = [torch.tensor(item[1]) for item in batch]
    return [data, target]

class dataset(data.Dataset):
    def __init__(self):
        super(dataset, self).__init__()

    def __len__(self):
        return 100

    def __getitem__(self, index):
        # A fixed-size dummy "image" and a variable-length label list.
        return torch.rand(5, 6), list(range(index))

dataloader = data.DataLoader(dataset=dataset(),
                             batch_size=4,
                             shuffle=True,
                             collate_fn=my_collate,  # use the custom collate function here
                             pin_memory=True)

for instance in dataloader:
    print(instance[0].shape, len(instance[1]))
    for labels in instance[1]:
        print('length', len(labels))
    input()

It didn’t work :slightly_frowning_face: The error message told me that “‘list’ object has no attribute ‘cuda’”; it seems the target wasn’t turned into a tensor.

I hope you have already solved the issue!
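
In case it is still useful: since the collate function keeps the labels as a plain Python list, it cannot be moved to the GPU with a single .cuda()/.to(device) call; each label tensor has to be moved individually. A small sketch (the tensors here are made up, just to illustrate):

import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

inputs = torch.rand(4, 3, 224, 224)                       # batched images
target = [torch.tensor([1, 5, 7]), torch.tensor([2, 3])]  # ragged label lists kept as a list

inputs = inputs.to(device)
target = [t.to(device) for t in target]  # a list has no .to()/.cuda(), so move each tensor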

I have exactly the same problem as you describe. Have you solved it? I also tried to manually calculate the loss over a number of images and then backpropagate the average loss and update the network weights, but the net didn’t converge. I’d be very glad for your reply.

RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>()
      6
      7 for epoch in range(epochs):
----> 8     for inputs, labels in trainloader:
      9         steps += 1
     10         inputs, labels = inputs.to(device), labels.to(device)

~/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py in __next__(self)
    613         if self.num_workers == 0:  # same-process loading
    614             indices = next(self.sample_iter)  # may raise StopIteration
--> 615             batch = self.collate_fn([self.dataset[i] for i in indices])
    616             if self.pin_memory:
    617                 batch = pin_memory_batch(batch)

~/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py in default_collate(batch)
    230     elif isinstance(batch[0], container_abcs.Sequence):
    231         transposed = zip(*batch)
--> 232         return [default_collate(samples) for samples in transposed]
    233
    234     raise TypeError((error_msg.format(type(batch[0]))))

~/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py in <listcomp>(.0)
    230     elif isinstance(batch[0], container_abcs.Sequence):
    231         transposed = zip(*batch)
--> 232         return [default_collate(samples) for samples in transposed]
    233
    234     raise TypeError((error_msg.format(type(batch[0]))))

~/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py in default_collate(batch)
    207             storage = batch[0].storage()._new_shared(numel)
    208             out = batch[0].new(storage)
--> 209         return torch.stack(batch, 0, out=out)
    210     elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \
    211             and elem_type.__name__ != 'string_':

RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 344 and 339 in dimension 3 at /pytorch/aten/src/TH/generic/THTensorMoreMath.cpp:1307

It is because the image shapes are not the same in your dataset. For example, when processing a batch, you cannot stack a [512, 512, 3] tensor and a [512, 317, 3] tensor together and feed them into the model. So you have to apply some transformation to make them the same shape.
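
For example, with torchvision one common way is to resize and crop every image to a fixed size inside the dataset transform (the sizes below are just an illustration):

import torchvision.transforms as transforms

# Every image becomes a [3, 224, 224] tensor, so default_collate can stack the batch.
transform = transforms.Compose([
    transforms.Resize(256),      # shorter side -> 256, keeps aspect ratio
    transforms.CenterCrop(224),  # fixed 224x224 crop
    transforms.ToTensor(),
])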

4 Likes

Thank you so much :sweat_smile: :sweat_smile: :sweat_smile: :sweat_smile:

Thanks, I had the same problem. This simple line of code solved my issue after struggling for about 4 hours. :grinning:

I am encountering a similar problem. I am working on medical image segmentation with a dataset of nii images, and I am getting the following error:
RuntimeError: Sizes of tensors must match except in dimension 1. Got 25 and 26 in dimension 2 (The offending index is 1)
PS: The size of the images is (192, 192, 16); I found this out by using the “first” function, which returns the first element of the dataset. I want to find the size of all images in the dataset, but I am totally confused as I am a beginner. Also, is there a way to resize all the nii images to one size to be on the safe side? torchvision transforms don’t work on nii files, and MONAI transforms (which do work on nii images) don’t seem to have any resize function. Please help me out.

I had the same issue. The training was working on one dataset but not on a similar dataset, causing the same error. I just realized that some of the images in the dataset that caused the error had 4 channels instead of 3. Converting those 4-channel images to 3 channels fixed the issue. Thanks for the help :slightly_smiling_face: :v:

Probably it got added later, but MONAI does have a resize function: Transforms — MONAI 0 documentation
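
For example, something along these lines should work (a rough sketch; the exact transform names and arguments may differ between MONAI versions, so please check the docs for your version):

from monai.transforms import Compose, EnsureChannelFirst, LoadImage, Resize

# Load a .nii volume and resize it to a fixed spatial size, e.g. (192, 192, 16).
transform = Compose([
    LoadImage(image_only=True),
    EnsureChannelFirst(),
    Resize(spatial_size=(192, 192, 16)),
])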