Concatenate tensor of 3 dimensions to tensor of 1 dimension while keeping first dimension

Suppose that I have a tensor of shape

tensor1 = [sentence length, batch_size, embedding dimension]

for instance: torch.Size([4, 32, 768])

I want to add a value to the embedding dimension (768 -> 769).

res = torch.cat((embedding[-1,:,:], batch.feat.unsqueeze(1)), dim=1)

where batch.feat is of size [32, 1]

but this leads to:

>>> res.size() 
torch.Size([32, 769])

How can I keep the sentence length dimension and obtain a tensor of shape

torch.Size([4, 32, 769])

(I have posted a related question on this but what I wanted to do didn’t make sense in that case).

What you are doing:

You concatenate a tensor embedding[-1,:,:] of shape [32, 768] (you only select the last element of the first dimension) with a tensor batch.feat.unsqueeze(1) of shape [32, 1] (I guess batch.feat is of shape [32]) along the second dimension (dim=1).
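That mismatch can be reproduced with a minimal sketch (feat here is a hypothetical stand-in for batch.feat, assumed to be of shape [32]):

```python
import torch

# Stand-ins for the question's tensors:
# embedding has shape [4, 32, 768], feat has shape [32]
embedding = torch.randn(4, 32, 768)
feat = torch.randn(32)

# Indexing with [-1, :, :] drops the sentence length dimension entirely,
# so the concatenation can only ever produce a 2d result
res = torch.cat((embedding[-1, :, :], feat.unsqueeze(1)), dim=1)
print(res.shape)  # torch.Size([32, 769])
```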


But you want to do:

import torch

embedding = torch.randn(4, 32, 768, dtype=torch.float)
batch = torch.randn(32, dtype=torch.float)[None, :, None]

print(embedding.shape, batch.shape)
>> torch.Size([4, 32, 768]) torch.Size([1, 32, 1])

tensor_cat = torch.cat((embedding, batch.repeat(4, 1, 1)), dim=2)

print(tensor_cat.shape)
>> torch.Size([4, 32, 769])

So you keep your embedding tensor as a 3d tensor, but reshape your batch.feat to a 3d tensor of shape [1, 32, 1]. Because your embedding tensor is of shape [4, 32, 768], you need to repeat your batch tensor along the first dimension so that both tensors match in every dimension except the one you concatenate along. Finally you can concatenate them along the third dimension (dim=2 - your embedding dimension).
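If the extra copy made by repeat matters, the same result can be sketched with expand, which returns a broadcast view instead of materializing the repeated data (feat is again a hypothetical stand-in for batch.feat):

```python
import torch

embedding = torch.randn(4, 32, 768)
feat = torch.randn(32)  # stands in for batch.feat

# expand returns a view; no memory is allocated for the repeated rows,
# and torch.cat copies from the view only once
feat_3d = feat[None, :, None].expand(embedding.size(0), -1, -1)  # [4, 32, 1]
tensor_cat = torch.cat((embedding, feat_3d), dim=2)
print(tensor_cat.shape)  # torch.Size([4, 32, 769])
```

The appended feature column is identical at every sentence position, since all four slices are views of the same underlying [32] tensor.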

Thank you so much!!!