# RuntimeError: Sizes of tensors must match except in dimension 1

RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 3 but got size 16 for tensor number 1 in the list. This error is raised at the line `y = torch.cat([x, h], 1)`. Please help me resolve this.

```
def forward(self, x, h, c):
    x = self.W(x)
    y = torch.cat([x, h], 1)
    i = self.Wy(y)
    b = self.Wi(i)
    ci = torch.sigmoid(self.Wbi(b) + c * self.Wci(c))
    cf = torch.sigmoid(self.Wbf(b) + c * self.Wcf(c))
    cc = cf * c + ci * self.relu(self.Wbc(b))
    co = torch.sigmoid(self.Wbo(b) + cc * self.Wco)
    ch = co * self.relu(cc)
    return cc, ch
```

`torch.cat` will try to concatenate multiple tensors in the specified dimension `dim`, but expects the tensors to have the same shape in all other dimensions.
Here is a small example:

```
# works, since dim0 has the same shape
x = torch.randn(3, 2)
h = torch.randn(3, 5)
y = torch.cat([x, h], 1)

# fails, since both dims have a different shape
x = torch.randn(3, 2)
h = torch.randn(16, 5)
y = torch.cat([x, h], 1)
# RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 3 but got size 16 for tensor number 1 in the list.
```

The shape of `x` is `[3, 1024, 10, 10]` and the shape of `h` is `[16, 256, 10, 10]`.
I tried to reshape `x` to `[16, 1024, 10, 10]` by using:

```
x1 = x.repeat(5, 1, 1, 1)
x = torch.cat([x1, torch.zeros(1, 1024, 10, 10)], 0)
```

This gives `x` the shape `[16, 1024, 10, 10]`, but now another error comes out: "RuntimeError: Sizes of tensors must match except in dimension 0. Expected size 256 but got size 1024 for tensor number 1 in the list".

One more thing: even if I reshape `h` instead and keep `x` intact, the same error comes out the other way around: "RuntimeError: Sizes of tensors must match except in dimension 0. Expected size 1024 but got size 256 for tensor number 1 in the list".

You are most likely trying to concatenate the tensors in `dim0`, while the original use case used `dim1`:

```
x = torch.randn([3, 1024, 10, 10])
h = torch.randn([16, 256, 10, 10])

x1 = x.repeat(5, 1, 1, 1)
x = torch.cat([x1, torch.zeros(1, 1024, 10, 10)], 0)

# fails
out = torch.cat((x, h), dim=0)
# RuntimeError: Sizes of tensors must match except in dimension 0. Expected size 1024 but got size 256 for tensor number 1 in the list.

# works
out = torch.cat((x, h), dim=1)
print(out.shape)
# torch.Size([16, 1280, 10, 10])
```

Thanks for the help. But it gives another error: "RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:2 and cuda:0! (when checking argument for argument tensors in method wrapper_cat)".

I have used only GPU 2, but this error still comes up.

Check the `.device` of both tensors and make sure they are on the same GPU.
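For example, a quick check could look like this (a minimal sketch using random tensors; the target device falls back to the CPU when `cuda:2` is unavailable):

```python
import torch

# pick a target device; fall back to the CPU when cuda:2 is unavailable
device = torch.device("cuda:2" if torch.cuda.device_count() > 2 else "cpu")

x = torch.randn(16, 1024, 10, 10, device=device)
h = torch.randn(16, 256, 10, 10)  # accidentally created on the default device

print(x.device, h.device)

# move both tensors to the same device before calling torch.cat
h = h.to(x.device)
out = torch.cat((x, h), dim=1)
print(out.shape)
# torch.Size([16, 1280, 10, 10])
```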

I have set the device now. But I think the issues are coming up because the data loading isn't happening properly. I need to load a batch of 16 images (with 3 channels) as the input, but the shape of `x` in the above code suggests only 3 images were taken, since its size is `[3, 1024, 10, 10]`. How can I solve this issue? Please help.

The data loading shouldn't change the device at all, so I would recommend fixing the issues one by one, as it seems you are now mixing up different errors.

Check the `.device` attribute of all involved tensors, and in case any of them uses an unexpected GPU id, narrow down where this tensor is coming from and which part of the code set its device, then fix it.

Afterwards, check the `DataLoader` and make sure each batch has the expected shape. Once this is done, check my code to see how tensors with a different shape in one dimension can be concatenated to a single tensor.
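A minimal shape check on the `DataLoader` could look like this (a sketch using random data as a stand-in for the real dataset; the image size is an assumption for illustration):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# stand-in dataset: 64 RGB images with integer class targets
data = torch.randn(64, 3, 32, 32)
targets = torch.randint(0, 10, (64,))
loader = DataLoader(TensorDataset(data, targets), batch_size=16)

for images, labels in loader:
    # expect [16, 3, 32, 32]; anything else points to the Dataset or collate_fn
    print(images.shape, labels.shape)
    break
```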

Hi. Now the device attribute is set and working fine. I made the following modifications to the code:

```
x1 = x.repeat(5, 1, 1, 1)
zero_ten = torch.zeros(1, 1024, 10, 10).to("cuda:0")
x = torch.cat([x1, zero_ten], 0)
print(x.shape)
x = self.W(x)
print(h.shape)
y = torch.cat([x, h], 1)  # concatenate input and hidden layers
print(y.shape)
i = self.Wy(y)  # reduce to hidden layer size
b = self.Wi(i)  # depthwise 3x3
ci = torch.sigmoid(self.Wbi(b) + c * self.Wci)
cf = torch.sigmoid(self.Wbf(b) + c * self.Wcf)
cc = cf * c + ci * self.relu(self.Wbc(b))
co = torch.sigmoid(self.Wbo(b) + cc * self.Wco)
ch = co * self.relu(cc)
```
Now I can see an error: "Sizes of tensors must match except in dimension 1. Expected size 81 but got size 16 for tensor number 1 in the list."

The sizes of `x` and `h` are `[81, 256, 5, 5]` and `[16, 64, 5, 5]`, respectively.

Unfortunately, this won't work since you are again trying to concatenate two tensors which differ in size in more than a single dimension.
In particular, since you want to concatenate `x` and `h` in `dim1`, make sure that all other dimensions have exactly the same size, as it would fail otherwise. Based on your code I would guess the `repeat` approach might not be the right solution; you would need to check why the tensors differ in multiple dimensions and what the concatenation should actually achieve using these two tensors.
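One way to catch such mismatches early is to verify the non-concatenated dimensions before calling `torch.cat`. A defensive sketch (`cat_checked` is a hypothetical helper, not part of the original code, and assumes both tensors have the same number of dimensions):

```python
import torch

def cat_checked(a, b, dim):
    """Concatenate two same-rank tensors after verifying all other dims agree."""
    for d in range(a.dim()):
        if d != dim and a.size(d) != b.size(d):
            raise ValueError(
                f"dim {d} mismatch: {a.size(d)} vs {b.size(d)}; "
                f"torch.cat in dim {dim} needs all other dims to be equal"
            )
    return torch.cat((a, b), dim=dim)

x = torch.randn(81, 256, 5, 5)
h = torch.randn(16, 64, 5, 5)
# cat_checked(x, h, dim=1)  # raises ValueError: dim 0 mismatch: 81 vs 16
```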

Thank you for the answer