RuntimeError: invalid argument 0:

I got an error like this:

Traceback (most recent call last):
  File "train11.py", line 90, in <module>
    for data, target in train_bar:
  File "/home/mhha/.conda/envs/pytorchmh2/lib/python3.5/site-packages/tqdm/_tqdm.py", line 941, in __iter__
    for obj in iterable:
  File "/home/mhha/.conda/envs/pytorchmh2/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 267, in __next__
    return self._process_next_batch(batch)
  File "/home/mhha/.conda/envs/pytorchmh2/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 301, in _process_next_batch
    raise batch.exc_type(batch.exc_msg)
RuntimeError: Traceback (most recent call last):
  File "/home/mhha/.conda/envs/pytorchmh2/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 55, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/home/mhha/.conda/envs/pytorchmh2/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 135, in default_collate
    return [default_collate(samples) for samples in transposed]
  File "/home/mhha/.conda/envs/pytorchmh2/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 135, in <listcomp>
    return [default_collate(samples) for samples in transposed]
  File "/home/mhha/.conda/envs/pytorchmh2/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 112, in default_collate
    return torch.stack(batch, 0, out=out)
  File "/home/mhha/.conda/envs/pytorchmh2/lib/python3.5/site-packages/torch/functional.py", line 66, in stack
    return torch.cat(inputs, dim, out=out)
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 3 and 1 in dimension 1 at /opt/conda/conda-bld/pytorch_1522182087074/work/torch/lib/TH/generic/THTensorMath.c:2897

What is the problem?

The error message is a bit hard to interpret without the code.
It seems you are trying to load images using a DataLoader.
Some of these images seem to have 3 channels (color images), while others might have a single channel (BW images).
Since the dimensions differ in dim1, they cannot be concatenated into a batch.
You could try to add img = img.convert('RGB') to the __getitem__ of your Dataset.
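
Here is a minimal sketch reproducing the error (image sizes are made up):

import torch

# two samples as default_collate would see them
rgb = torch.randn(3, 224, 224)    # color image, 3 channels
gray = torch.randn(1, 224, 224)   # grayscale image, 1 channel

# the DataLoader stacks the samples into a batch, which fails
# because dim1 (the channel dimension) differs: 3 vs. 1
try:
    torch.stack([rgb, gray], 0)
except RuntimeError as e:
    print(e)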

Yes! You're right!

I use ImageNet images. Most of them are color images, but some are BW images.

Thanks a lot!!

I will add your advice to my code!

I got an error like this:

AttributeError: 'torch.FloatTensor' object has no attribute 'convert'

What should I import?

I already imported "from PIL import Image".

.convert is a method of PIL.Image, so you have to use it after you've loaded the image and before transforming it to a tensor.

Are you using your own Dataset or a class like ImageFolder?
If you are using the latter, the conversion should already be performed by the default image loader.
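
For reference, torchvision's default pil_loader does roughly this, so the conversion happens automatically:

from PIL import Image

def pil_loader(path):
    # load the file and convert it, so grayscale images also get 3 channels
    with open(path, 'rb') as f:
        img = Image.open(f)
        return img.convert('RGB')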

If you are using your own Dataset, you should add it to the __getitem__ method after the image has been loaded:

from PIL import Image
from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self, image_paths, transforms=None):
        self.image_paths = image_paths
        self.transforms = transforms

    def __getitem__(self, index):
        image = Image.open(self.image_paths[index])
        # force 3 channels, so grayscale and RGB images stack into one batch
        image = image.convert('RGB')
        if self.transforms:
            image = self.transforms(image)
        return image

    def __len__(self):
        return len(self.image_paths)
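
It can then be wrapped in a DataLoader as usual (the paths and batch size here are placeholders):

from torch.utils.data import DataLoader

dataset = MyDataset(image_paths=['a.jpg', 'b.png'], transforms=None)
loader = DataLoader(dataset, batch_size=2, shuffle=True)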

I skipped the target part. Let me know if this works for you!

Thanks a lot! It will help me a lot!

Hi @ptrblck,
I have a custom Dataset that returns the image and a label with it.
I am getting the same error,
but it's because I am returning the labels as well in the __getitem__ method.

How do you suggest I handle this?

Thank You

Hi Guys,
I found a solution to the problem.
It was not because I was also returning the labels; it was because some images had 3 channels while some had only 1.
I fixed it by adding .convert('RGB') to the image loading line.
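
In case it helps, my __getitem__ now looks roughly like this (simplified, names changed):

from PIL import Image
from torch.utils.data import Dataset

class MyLabeledDataset(Dataset):
    def __init__(self, image_paths, labels, transforms=None):
        self.image_paths = image_paths
        self.labels = labels
        self.transforms = transforms

    def __getitem__(self, index):
        # convert to RGB so 1-channel images become 3-channel
        image = Image.open(self.image_paths[index]).convert('RGB')
        if self.transforms:
            image = self.transforms(image)
        return image, self.labels[index]

    def __len__(self):
        return len(self.image_paths)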

Thanks

I also encountered the same problem. I used Dataset_folder to load the dataset. How did you solve it?

RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0. Got 2 and 1 in dimension 1 at /opt/conda/conda-bld/pytorch_1556653099582/work/aten/src/TH/generic/THTensor.cpp:711

Can you post your code?

I encountered an error when I started training. The error is:
Traceback (most recent call last):
  File "tools/train.py", line 142, in <module>
    main()
  File "tools/train.py", line 138, in main
    meta=meta)
  File "/cache/user-job-dir/codes/mmdetection/mmdet/apis/train.py", line 111, in train_detector
    meta=meta)
  File "/cache/user-job-dir/codes/mmdetection/mmdet/apis/train.py", line 305, in non_dist_train
    runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
  File "/home/work/anaconda3/lib/python3.6/site-packages/mmcv/runner/runner.py", line 371, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/home/work/anaconda3/lib/python3.6/site-packages/mmcv/runner/runner.py", line 275, in train
    self.model, data_batch, train_mode=True, **kwargs)
  File "/cache/user-job-dir/codes/mmdetection/mmdet/apis/train.py", line 75, in batch_processor
    losses = model(**data)
  File "/home/work/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/work/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/work/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/cache/user-job-dir/codes/mmdetection/mmdet/core/fp16/decorators.py", line 49, in new_func
    return old_func(*args, **kwargs)
  File "/cache/user-job-dir/codes/mmdetection/mmdet/models/detectors/base.py", line 137, in forward
    return self.forward_train(img, img_meta, **kwargs)
  File "/cache/user-job-dir/codes/mmdetection/mmdet/models/detectors/cascade_rcnn.py", line 231, in forward_train
    feats=[lvl_feat[j][None] for lvl_feat in x])
  File "/cache/user-job-dir/codes/mmdetection/mmdet/core/bbox/samplers/base_sampler.py", line 75, in sample
    assign_result.add_gt_(gt_labels)
  File "/cache/user-job-dir/codes/mmdetection/mmdet/core/bbox/assigners/assign_result.py", line 192, in add_gt_
    self.labels = torch.cat([gt_labels, self.labels])
RuntimeError: invalid argument 0: Tensors must have same number of dimensions: got 1 and 2 at /pytorch/aten/src/THC/generic/THCTensorMath.cu:62
How can I solve the problem?

Could you check the dimensions of gt_labels and self.labels?
It seems self.labels uses two dimensions, while gt_labels has only one.
I'm not familiar with your use case, but you might be able to squeeze() one dimension of self.labels.
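
A minimal sketch of the mismatch (the shapes are assumptions, not taken from mmdetection):

import torch

# assumed shapes: gt_labels is 1D, self.labels is 2D with a trailing dim of 1
gt_labels = torch.tensor([1, 2, 3])    # shape [3]
labels = torch.tensor([[0], [0]])      # shape [2, 1]

try:
    torch.cat([gt_labels, labels])     # fails: 1D vs. 2D
except RuntimeError as e:
    print(e)

# removing the extra dimension lets the concatenation work
print(torch.cat([gt_labels, labels.squeeze(1)]))  # tensor([1, 2, 3, 0, 0])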

Thanks, I've solved it!

How did you solve it? I am facing the same problem.