I have a list of sequences that I padded to the same length (the lengths are stored in `emb_len`). I have a separate tensor that I want to concatenate to every data point in the sequences.

Intuitively, it is something like this:

```
a b c d e f g 0 0 0
u u u u u u u u u u

h i j k l 0 0 0 0 0
u u u u u u u u u u
```

but the correct one (I suppose) would be:

```
a b c d e f g 0 0 0
u u u u u u u 0 0 0

h i j k l 0 0 0 0 0
u u u u u 0 0 0 0 0
```

I did something like this:

```
torch.cat([seq_embed,
           torch.cat([second_embed.unsqueeze(1).expand(batch_size, emb_len, second_emb_len),
                      torch.zeros([batch_size, second_embed.size(1) - emb_len, second_emb_len],
                                  dtype=torch.long)], 1)],
          2)
```

However, this does not work because `emb_len` is a tensor of variable lengths, something like `torch.LongTensor([1,2,3,4,5])`, and I get errors like `a Tensor with # elements cannot be converted to Scalar`. Is there any way to solve this problem?
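
One direction I was considering is to avoid the per-sequence `torch.zeros` call entirely: broadcast the second tensor over every time step, then zero out the padded positions with a boolean mask built from the lengths. A minimal sketch with made-up shapes (`batch_size`, `max_len`, `emb_dim`, `second_dim`, and `lengths` are placeholder names, with `lengths` playing the role of my `emb_len`):

```python
import torch

# Hypothetical shapes for illustration (not from my actual data).
batch_size, max_len, emb_dim, second_dim = 2, 10, 4, 3
seq_embed = torch.randn(batch_size, max_len, emb_dim)   # padded sequences
second_embed = torch.randn(batch_size, second_dim)      # one vector per sequence
lengths = torch.tensor([7, 5])                          # true length of each sequence

# Boolean mask: True at real positions, False at padding.
# Shape (batch_size, max_len) via broadcasting.
mask = torch.arange(max_len).unsqueeze(0) < lengths.unsqueeze(1)

# Broadcast second_embed over the time dimension, then zero the padding.
second_expanded = second_embed.unsqueeze(1).expand(batch_size, max_len, second_dim)
second_masked = second_expanded * mask.unsqueeze(2).to(second_expanded.dtype)

# Final shape: (batch_size, max_len, emb_dim + second_dim)
out = torch.cat([seq_embed, second_masked], dim=2)
```

This way every operation works on the full `(batch_size, max_len, ...)` tensors, so no scalar length is ever required, but I am not sure whether this is the idiomatic solution.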