# Concatenation of two tensors

Hi,
I have two tensors, both of shape [12, 39, 1024]. I want to concatenate them depth-wise, but in a one-to-one fashion: the first feature map of the first tensor should be followed by the first feature map of the second tensor, so the sequence would be [(1st feature map of tensor 1), (1st feature map of tensor 2), (2nd feature map of tensor 1), (2nd feature map of tensor 2), …]. If I use an existing PyTorch concatenation operation like torch.cat(), it simply appends the second tensor at the end. Is there any PyTorch function which can help me achieve this?

Regards

I believe you are looking for the `dim` argument of `torch.cat`:

• dim (int, optional) – the dimension over which the tensors are concatenated
```
>>> a = torch.rand((124, 39, 1024))
>>> b = torch.rand((124, 39, 1024))
>>> torch.cat((a, b)).shape
torch.Size([248, 39, 1024])

>>> torch.cat((a, b), dim=0).shape
torch.Size([248, 39, 1024])

>>> torch.cat((a, b), dim=1).shape
torch.Size([124, 78, 1024])

>>> torch.cat((a, b), dim=2).shape
torch.Size([124, 39, 2048])
```

I don't think this helps me. I don't want to append one tensor after the other; I want to concatenate depth-wise with the feature maps of the two tensors interleaved one by one.

I’m not sure I understand the use case correctly and what “feature” maps refer to in this case, but this might work:

```
a = torch.zeros(2, 3, 4)
a[1] = 1.

b = torch.ones(2, 3, 4) * 2
b[1] = 3.

c = torch.cat((a, b), dim=1)
print(c)
> tensor([[[0., 0., 0., 0.],
           [0., 0., 0., 0.],
           [0., 0., 0., 0.],
           [2., 2., 2., 2.],
           [2., 2., 2., 2.],
           [2., 2., 2., 2.]],

          [[1., 1., 1., 1.],
           [1., 1., 1., 1.],
           [1., 1., 1., 1.],
           [3., 3., 3., 3.],
           [3., 3., 3., 3.],
           [3., 3., 3., 3.]]])

d = c.clone()
# split dim=1 into the two concatenated halves, then interleave them
d = d.view(d.size(0), 2, d.size(1) // 2, d.size(2)).permute(0, 2, 1, 3).contiguous().view_as(c)
print(d)
> tensor([[[0., 0., 0., 0.],
           [2., 2., 2., 2.],
           [0., 0., 0., 0.],
           [2., 2., 2., 2.],
           [0., 0., 0., 0.],
           [2., 2., 2., 2.]],

          [[1., 1., 1., 1.],
           [3., 3., 3., 3.],
           [1., 1., 1., 1.],
           [3., 3., 3., 3.],
           [1., 1., 1., 1.],
           [3., 3., 3., 3.]]])
```
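If I understood the goal correctly, the same interleaving can also be written with `torch.stack` followed by a reshape (a minimal sketch, assuming the feature maps live in dim 1):

```python
import torch

a = torch.zeros(2, 3, 4)
b = torch.ones(2, 3, 4) * 2

# stack inserts a new dim right after the feature-map dim; flattening
# that dim interleaves the rows of a and b one by one
c = torch.stack((a, b), dim=2)           # shape [2, 3, 2, 4]
c = c.reshape(a.size(0), -1, a.size(2))  # shape [2, 6, 4]
print(c[0])  # rows alternate: zeros, twos, zeros, twos, ...
```

This avoids the explicit `view`/`permute` bookkeeping, at the cost of materializing the stacked tensor.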
```
import torch

a = torch.zeros(1, 2, 3, 4)
b = torch.ones(1, 2, 3, 4) * 2
# tensor([[[[2., 2., 2., 2.],
#           [2., 2., 2., 2.],
#           [2., 2., 2., 2.]],
#
#          [[2., 2., 2., 2.],
#           [2., 2., 2., 2.],
#           [2., 2., 2., 2.]]]])

c = torch.cat((a, b), dim=2)
# tensor([[[[0., 0., 0., 0.],
#           [0., 0., 0., 0.],
#           [0., 0., 0., 0.],
#           [2., 2., 2., 2.],
#           [2., 2., 2., 2.],
#           [2., 2., 2., 2.]],
#
#          [[0., 0., 0., 0.],
#           [0., 0., 0., 0.],
#           [0., 0., 0., 0.],
#           [2., 2., 2., 2.],
#           [2., 2., 2., 2.],
#           [2., 2., 2., 2.]]]])

d = c.clone()
# print(d.size()) # torch.Size([1, 2, 6, 4])
# split dim=2 into the two concatenated halves, then interleave them
d = d.view(d.size(0), d.size(1), 2, d.size(2) // 2, d.size(3))
d = d.permute(0, 1, 3, 2, 4).contiguous().view_as(c)
print(d)
print(d.shape)
print('------')
print(a)
print(b)
print('------')
```
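For the original [12, 39, 1024] tensors, the same idea would look like this (a sketch, assuming the 39 feature maps in dim 1 are the ones to interleave):

```python
import torch

a = torch.randn(12, 39, 1024)
b = torch.randn(12, 39, 1024)

# stack along a new dim right after the feature-map dim, then flatten it
merged = torch.stack((a, b), dim=2).reshape(12, 2 * 39, 1024)

# every even row comes from a, every odd row from b
assert torch.equal(merged[:, 0::2], a)
assert torch.equal(merged[:, 1::2], b)
print(merged.shape)  # torch.Size([12, 78, 1024])
```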