I have an input tensor of shape (batch_size,) where each value is a sequence length, and I want to convert it to a tensor of shape (batch_size, max_seq_len) of position indices to feed into a position embedding. How can I do that in PyTorch?
For example:
Input tensor: [2, 4, 3]
Output tensor: [[0, 1, 4, 4], [0, 1, 2, 3], [0, 1, 2, 4]] (4 is the padding index, equal to max_seq_len)
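
A minimal sketch of one way to do this, assuming the padding index equals max_seq_len as the example implies, using torch.arange plus broadcasting and masked_fill:

```python
import torch

lengths = torch.tensor([2, 4, 3])       # shape (batch_size,), each value a sequence length
max_seq_len = int(lengths.max())        # 4 in this example
pad_idx = max_seq_len                   # assumption: padding index = max_seq_len, as in the example

# One row of position indices 0..max_seq_len-1, broadcast over the batch
positions = torch.arange(max_seq_len).unsqueeze(0).expand(len(lengths), -1)

# Positions at or beyond each sequence's length become the padding index
out = positions.masked_fill(positions >= lengths.unsqueeze(1), pad_idx)

print(out)
# tensor([[0, 1, 4, 4],
#         [0, 1, 2, 3],
#         [0, 1, 2, 4]])
```

If you use pad_idx = max_seq_len like this, the position embedding needs max_seq_len + 1 entries (e.g. nn.Embedding(max_seq_len + 1, dim, padding_idx=pad_idx)) so the padding index is a valid row.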