nn.Embedding when each time step has a variable-length input in mini-batch training

According to this tutorial, if we want to embed variable-length sequences, we can pad them so that they take the following form:
tensor([[ 387,   62, 1250,   25, 7384],
        [  25,    4,   25,   73,    4],
        [ 177,  463,  200,   60,    2],
        [   9,   92,  483,  480,    0],
        [ 971,   51,   25,   66,    0],
        [   4,    4,    2,    2,    0],
        [   4,    2,    0,    0,    0],
        [   4,    0,    0,    0,    0],
        [   6,    0,    0,    0,    0],
        [   2,    0,    0,    0,    0]])
whose shape is (max_length, batch_size); then you can embed it with nn.Embedding. Notice that here, at each time step, there is exactly one input per sequence.
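For reference, a minimal sketch of that standard setup (the vocabulary size and embedding dimension below are made-up values, and I use 0 as the padding index):

import torch
import torch.nn as nn

# Padded batch of token indices, shape (max_length, batch_size)
batch = torch.tensor([[387,  62, 1250,  25, 7384],
                      [ 25,   4,   25,  73,    4],
                      [177, 463,  200,  60,    2],
                      [  9,  92,  483, 480,    0]])

# padding_idx=0 keeps the pad token's embedding fixed at the zero vector
embedding = nn.Embedding(num_embeddings=10000, embedding_dim=8, padding_idx=0)

embedded = embedding(batch)  # shape (max_length, batch_size, embedding_dim)
print(embedded.shape)        # torch.Size([4, 5, 8])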

So my question is: if I have multiple inputs at each time step, the mini-batch input would look like this (written as nested lists, since a ragged tensor cannot be constructed directly):
tensor([[[4, 3], [5, 8], [7, 9, 6], [5, 4, 3], [8, 8]],
[[4, 3], [5, 8], [7, 9, 6], [5, 4, 3], [2]],
[[4, 3], [5, 8], [7, 9, 6], [2], [0]],
[[4, 3], [5, 8], [2], [0], [0]],
[[4, 3], [5, 8], [0], [0], [0]],
[[4, 3], [5, 8], [0], [0], [0]],
[[4, 3], [2], [0], [0], [0]],
[[2], [0], [0], [0], [0]]])
Is there a ‘PyTorch’ way to appropriately embed such mini-batch inputs?
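In case it helps, here is a minimal sketch of the closest valid input I can build: I also pad the inner lists with 0 to a common width, embed the resulting 3-D index tensor, and then collapse the extra dimension with a sum (the vocabulary size and embedding dimension are again made up). I am not sure this is the idiomatic approach:

import torch
import torch.nn as nn

# Ragged per-step inputs, inner-padded with 0 to a common width,
# shape (max_length, batch_size, max_inputs_per_step); only the first
# few time steps are shown for brevity
batch = torch.tensor([[[4, 3, 0], [5, 8, 0], [7, 9, 6], [5, 4, 3], [8, 8, 0]],
                      [[4, 3, 0], [5, 8, 0], [7, 9, 6], [5, 4, 3], [2, 0, 0]],
                      [[4, 3, 0], [5, 8, 0], [7, 9, 6], [2, 0, 0], [0, 0, 0]],
                      [[4, 3, 0], [5, 8, 0], [2, 0, 0], [0, 0, 0], [0, 0, 0]]])

embedding = nn.Embedding(num_embeddings=10, embedding_dim=8, padding_idx=0)

# shape (max_length, batch_size, max_inputs_per_step, embedding_dim)
embedded = embedding(batch)

# Collapse the inner dimension by summing; the padded positions contribute
# zeros because padding_idx=0 keeps that embedding at the zero vector
step_embedded = embedded.sum(dim=2)  # (max_length, batch_size, embedding_dim)
print(step_embedded.shape)           # torch.Size([4, 5, 8])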
Thanks very much.