# Concat two tensors with different dimensions

Hi all,

Is it possible to concatenate two tensors that have different dimensions?

For example, if `A` has shape `[16, 512]` and `B` has shape `[16, 32, 2048]`,

how could they be combined into a tensor of shape `[16, 544, 2048]`?

I’m not sure how you would like to fill dim2 in your first tensor, but if you just want to repeat the values, this code would work:

```python
a = torch.randn(16, 512)
b = torch.randn(16, 32, 2048)
# repeat a's values along a new dim2 so dim0 and dim2 match b
a = a.unsqueeze(2).expand(-1, -1, 2048)  # [16, 512, 2048]
c = torch.cat((a, b), dim=1)
print(c.shape)
# > torch.Size([16, 544, 2048])
```
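As a side note on the snippet above: `expand` returns a view without allocating new memory, so the repeated values share storage with `a`. If you need an independent copy instead, `repeat` (or `.contiguous()` after `expand`) materializes one:

```python
import torch

a = torch.randn(16, 512)
a_view = a.unsqueeze(2).expand(-1, -1, 2048)  # view, no extra memory
a_copy = a.unsqueeze(2).repeat(1, 1, 2048)    # materialized copy

print(a_view.shape, a_copy.shape)  # both torch.Size([16, 512, 2048])
```

Both produce the same values; the difference only matters if you plan to write into the result.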

Hi Ptrblck,
I hope you are well. I need to concatenate two tensors `x` and `y`, each of size 64x100x9x9, where 64 is the batch size, 100 is the number of channels, and 9x9 is the height and width.
I want the concatenated result to have the shape 64x100x18x18.
I tried `torch.cat((x, y), dim=??)`.

You won’t be able to create the output as `[64, 100, 18, 18]`, since this shape would contain twice as many elements as both inputs combined.
However, you could create a tensor of shape `[64, 100, 18, 9]` or `[64, 100, 9, 18]` via:

```python
x = torch.randn(64, 100, 9, 9)
y = torch.randn(64, 100, 9, 9)

print(torch.cat((x, y), dim=2).shape)  # torch.Size([64, 100, 18, 9])
print(torch.cat((x, y), dim=3).shape)  # torch.Size([64, 100, 9, 18])
```
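To see why `[64, 100, 18, 18]` is out of reach via concatenation alone, you can compare the element counts directly: that shape would hold twice as many elements as both inputs combined.

```python
import torch

x = torch.randn(64, 100, 9, 9)
y = torch.randn(64, 100, 9, 9)

print(x.numel() + y.numel())                 # 1036800 elements in both inputs
print(torch.empty(64, 100, 18, 18).numel())  # 2073600, i.e. twice as many
```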

I have these shapes and I want to use `torch.cat`:
`torch.Size([64, 27, 27])`, `torch.Size([3, 224, 224])`, `torch.Size([192, 13, 13])`
but I got this error:
`RuntimeError: Sizes of tensors must match except in dimension 2. Expected size 64 but got size 3 for tensor number 1 in the list.`
I need to visualize them, which is why I need to reshape the first and last tensors to match the middle one (`[3, 224, 224]`).

You won’t be able to concatenate these tensors, as all dimensions differ in size (as the error message also indicates).
To concatenate tensors, all dimensions besides the one used for concatenation must be equal:

```python
a = torch.randn(2, 224, 224)
b = torch.randn(5, 224, 224)
c = torch.cat((a, b), dim=0)  # works since only dim0 differs in size
```
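For the visualization use case specifically, one option (a sketch, not the only way) would be to resize all feature maps to a common spatial size with `F.interpolate` and then plot each channel separately:

```python
import torch
import torch.nn.functional as F

feats = [torch.randn(64, 27, 27), torch.randn(3, 224, 224), torch.randn(192, 13, 13)]

# F.interpolate expects [N, C, H, W], so add a batch dim, resize, and remove it
resized = [F.interpolate(f.unsqueeze(0), size=(224, 224), mode='bilinear',
                         align_corners=False).squeeze(0) for f in feats]

# now only dim0 (the channel count) differs, so concatenation works there
stacked = torch.cat(resized, dim=0)
print(stacked.shape)  # torch.Size([259, 224, 224])
```

Note that the channel counts still differ (64 vs. 3 vs. 192), so even after resizing the tensors can only be concatenated along dim0.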

Actually, I’m trying to visualize the features of an image using the pretrained AlexNet, and the `UndoableConvLayer` class should transpose the convolution. I think it should return a tensor with the same shape as the image, but I don’t know how to do that.

```python
net = models.alexnet(pretrained=True)
layer1 = UndoableConvLayer(net.features, net.features)
layer2 = UndoableConvLayer(net.features, net.features)

print(x_im.shape)          # torch.Size([3, 224, 224])
print(layer1(x_im).shape)  # torch.Size([64, 27, 27])
```

I don’t know what `UndoableConvLayer` is doing internally, but it seems it returns a tensor in a shape that is neither the input shape nor compatible with an image format, so you might want to revisit this layer’s implementation.

```python
class UndoableConvLayer(nn.Module):
    def __init__(self, conv: nn.Conv2d, pool: nn.MaxPool2d):
        super().__init__()
        self._cache = None
        self.conv = conv
        self.pool = pool
        self.pool.return_indices = True

    def undo(self, a: torch.Tensor, idx: torch.Tensor, out_pad: int = 0) -> torch.Tensor:
        from torch.nn.functional import max_unpool2d, conv_transpose2d

        # max_unpool2d requires the pooling kernel size to invert the pooling
        x = max_unpool2d(a, indices=idx, kernel_size=self.pool.kernel_size)
        return x

    def forward(self, x):
        s = self.conv(x)
        out, idx = self.pool(s)
        a = torch.relu(out)
        return a, idx
```
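To map the pooled activation back toward the input resolution (the DeconvNet-style visualization idea the class seems to be aiming for), you can unpool with the stored indices and then run `conv_transpose2d` with the conv's own weights. A minimal sketch, assuming a standalone conv/pool pair mirroring AlexNet's first stage rather than the full `net.features` block:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# assumed standalone version of AlexNet's first conv + pool stage
conv = nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2)
pool = nn.MaxPool2d(kernel_size=3, stride=2, return_indices=True)

x = torch.randn(1, 3, 224, 224)
s = conv(x)         # [1, 64, 55, 55]
out, idx = pool(s)  # [1, 64, 27, 27] plus the pooling indices
a = torch.relu(out)

# undo: unpool with the stored indices, then transpose the convolution
# with the conv's own weights to recover the input resolution
u = F.max_unpool2d(a, idx, kernel_size=3, stride=2, output_size=s.shape[-2:])
r = F.conv_transpose2d(u, conv.weight, stride=4, padding=2, output_padding=1)
print(r.shape)  # torch.Size([1, 3, 224, 224])
```

The `output_size` and `output_padding` arguments are needed because stride > 1 makes the forward shapes ambiguous to invert; here they are chosen to round-trip exactly to 224x224.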

Now I understand where the problem is, but I haven’t solved it yet: it happens when I pass the image through `layer1`. It’s the first time I’ve faced this problem, and I’ve tried for a really long time but couldn’t solve it.