Hey, I would like to know how I can concatenate two tensors like this:
t1 = torch.rand(2, 10, 512)
t2 = torch.rand(2, 768)
and get a tensor with this shape:
>>> torch.Size([2, 10, 1280])
Let’s assume that shapes are:
t1_shape = (batch_size, sequence_len, embedding_dim)
t2_shape = (batch_size, embedding_dim)
I want to concatenate these tensors along the embedding_dim, so that every slice along the sequence_len dimension is concatenated with the same t2 tensor.
As a solution I see:
t2 = t2.unsqueeze(1)
t2.size()
>>> torch.Size([2, 1, 768])
t2 = torch.cat((t2,) * 10, dim=1)
t2.size()
>>> torch.Size([2, 10, 768])
torch.cat((t1, t2), dim=2)
But I'm afraid this approach wastes memory by duplicating the t2 tensor 10 times (in reality the sequence length can be much larger).
Is there any memory efficient solution?
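For reference, here is a sketch of one idea I'm considering: replacing the torch.cat-based duplication with expand(), which returns a view rather than copying the data. I'm not sure it fully solves the problem, since the final torch.cat still has to allocate the output, but it should at least avoid materializing the intermediate (batch_size, sequence_len, 768) copy of t2:

```python
import torch

t1 = torch.rand(2, 10, 512)
t2 = torch.rand(2, 768)

# expand() creates a view over t2's storage, so the repeated
# rows are not actually copied before the final cat.
t2_exp = t2.unsqueeze(1).expand(-1, t1.size(1), -1)  # (2, 10, 768), a view
out = torch.cat((t1, t2_exp), dim=2)
print(out.size())  # torch.Size([2, 10, 1280])
```

Would this be the recommended way, or is there something even cheaper?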