RuntimeError: The size of tensor a (3) must match the size of tensor b (2) at non-singleton dimension 0

The code ran fine on the first run, but from the second run onwards it has been throwing the above error in the following loop:

for i, (inputs, labels) in enumerate(datasets['test']):
    print(labels)

datasets['test'] mainly holds two objects:

  1. data which is a tensor of size [20000, 32, 32, 3]
  2. labels which is an ndarray of size [20000]

I am working with PyTorch, Python 3.6, on Windows 10.
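For reference, a minimal sketch of a custom Dataset wrapping data shaped like this (class and variable names are hypothetical, and the sample count is reduced from 20000 to 8):

```python
import numpy as np
import torch
from torch.utils.data import DataLoader, Dataset

class ImageDataset(Dataset):
    """Hypothetical wrapper matching the structure described above."""
    def __init__(self, data, labels, transform=None):
        self.data = data           # tensor of shape [N, 32, 32, 3]
        self.labels = labels       # ndarray of shape [N]
        self.transform = transform

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        image = self.data[idx]
        if self.transform is not None:
            image = self.transform(image)
        # index by idx so each sample yields a single scalar label
        return image, int(self.labels[idx])

data = torch.randn(8, 32, 32, 3)            # 8 samples instead of 20000
labels = np.random.randint(0, 10, size=8)   # integer class labels
loader = DataLoader(ImageDataset(data, labels), batch_size=4)
for i, (inputs, targets) in enumerate(loader):
    print(targets.shape)  # torch.Size([4])
```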

Could you post the complete error message including the stack trace, please?
Based on the title, the shape mismatch is raised in dim0, but it’s hard to guess what exactly is failing.

Thanks for your response.
I am using the following repo with customizations for my data:

When I run main.py I get the same error as when I execute the code line by line in the console.
Following is the complete stack of error:

Traceback (most recent call last):
  File "F:/Richa/Richa_Python_codes/Variational-Capsule-Routing/src/main_r.py", line 136, in <module>
    score = main(parser.parse_known_args()[0])
  File "F:/Richa/Richa_Python_codes/Variational-Capsule-Routing/src/main_r.py", line 107, in main
    score = train(model, dataloaders, args)
  File "F:\Richa\Richa_Python_codes\Variational-Capsule-Routing\src\train.py", line 103, in train
    test_loss, test_acc = evaluate(model, args, dataloaders['test'])
  File "F:\Richa\Richa_Python_codes\Variational-Capsule-Routing\src\evaluate.py", line 13, in evaluate
    for i, (inputs, labels) in enumerate(dataloader):
  File "C:\Users\user\.conda\envs\gpu-test4\lib\site-packages\torch\utils\data\dataloader.py", line 435, in __next__
    data = self._next_data()
  File "C:\Users\user\.conda\envs\gpu-test4\lib\site-packages\torch\utils\data\dataloader.py", line 475, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "C:\Users\user\.conda\envs\gpu-test4\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "C:\Users\user\.conda\envs\gpu-test4\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "F:\Richa\Richa_Python_codes\Variational-Capsule-Routing\src\datasets_r.py", line 120, in __getitem__
    image = self.transform(self.data[idx])
  File "C:\Users\user\.conda\envs\gpu-test4\lib\site-packages\torchvision\transforms\transforms.py", line 67, in __call__
    img = t(img)
  File "C:\Users\user\.conda\envs\gpu-test4\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\user\.conda\envs\gpu-test4\lib\site-packages\torchvision\transforms\transforms.py", line 226, in forward
    return F.normalize(tensor, self.mean, self.std, self.inplace)
  File "C:\Users\user\.conda\envs\gpu-test4\lib\site-packages\torchvision\transforms\functional.py", line 284, in normalize
    tensor.sub_(mean).div_(std)
RuntimeError: The size of tensor a (3) must match the size of tensor b (2) at non-singleton dimension 0

Thanks for posting the stack trace.
It seems normalize raises the shape mismatch error.
Could you check how many values you’ve passed to transforms.Normalize and the number of channels of each input image?
It seems that you might have used two values for the mean and std in Normalize while your input contains 3 channels as seen here:

x = torch.randn(3, 24, 24)
norm = transforms.Normalize((0.5, 0.5), (0.5, 0.5))
out = norm(x)
> RuntimeError: The size of tensor a (3) must match the size of tensor b (2) at non-singleton dimension 0

Thanks a lot for your suggestion!
I was making exactly the mistake you pointed out during normalization. After fixing it, the code threw another error:
RuntimeError: multi-target not supported at C:/cb/pytorch_1000000000000/work/aten/src\THCUNN/generic/ClassNLLCriterion.cu:15

I checked other posts where you mentioned that one-hot encoded targets are not supported; mine are labelled images, where the images and labels are separate variables.

Full stack of the same is the following:
Traceback (most recent call last):
  File "F:/Richa/Richa_Python_codes/Variational-Capsule-Routing/src/main_r.py", line 136, in <module>
    score = main(parser.parse_known_args()[0])
  File "F:/Richa/Richa_Python_codes/Variational-Capsule-Routing/src/main_r.py", line 107, in main
    score = train(model, dataloaders, args)
  File "F:\Richa\Richa_Python_codes\Variational-Capsule-Routing\src\train.py", line 103, in train
    test_loss, test_acc = evaluate(model, args, dataloaders['test'])
  File "F:\Richa\Richa_Python_codes\Variational-Capsule-Routing\src\evaluate.py", line 22, in evaluate
    loss = F.cross_entropy(yhat, labels.cuda())
  File "C:\Users\user\.conda\envs\gpu-test4\lib\site-packages\torch\nn\functional.py", line 2468, in cross_entropy
    return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
  File "C:\Users\user\.conda\envs\gpu-test4\lib\site-packages\torch\nn\functional.py", line 2264, in nll_loss
    ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: multi-target not supported at C:/cb/pytorch_1000000000000/work/aten/src\THCUNN/generic/ClassNLLCriterion.cu:15

The new error is raised e.g. if you are passing a target with an additional dimension (which would be the case for a one-hot encoded target).
For a standard multi-class classification use case the model output should have the shape [batch_size, nb_classes], while the target should contain the class indices in the range [0, nb_classes - 1] in the shape [batch_size].
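A minimal sketch of the expected shapes (the names and sizes are illustrative):

```python
import torch
import torch.nn.functional as F

batch_size, nb_classes = 4, 10
output = torch.randn(batch_size, nb_classes)          # model output: [batch_size, nb_classes]
target = torch.randint(0, nb_classes, (batch_size,))  # class indices: [batch_size]
loss = F.cross_entropy(output, target)                # works, returns a scalar loss

# A target with an extra dimension, e.g. shape [batch_size, 1], triggers
# the "multi-target not supported" error on older PyTorch versions.
```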

Thanks a lot for your suggestion! My target has the shape [batch_size, test_sample_size], which is causing the problem, although my model gives the output in the shape [batch_size, nb_classes].