Utility of padded_seq_batch

I understand why we use padded_seq_batch when training RNNs with variable-length inputs, but I can't help but wonder:
Why is it so complicated? Couldn't we just make a list of all the input sequences in the batch and then iterate over it?

I suppose this is probably a performance issue, but I don’t really understand why.

Can somebody explain it in simple terms?

This is exactly what pack_padded_sequence does. Iterating over a plain Python list one sequence at a time would serialize the computation and lose GPU parallelism; packing keeps the whole batch in one tensor, so each timestep is still a single batched matrix multiply, but only over the sequences that are still active at that step. Check this out:

https://pytorch.org/docs/stable/nn.html#torch.nn.utils.rnn.pack_padded_sequence
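
For what it's worth, here's a minimal sketch of the pad-then-pack workflow. The toy tensors, dimensions, and the LSTM are made up for illustration:

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence, pad_packed_sequence

# Hypothetical toy batch: three sequences of lengths 4, 2, and 1,
# each timestep a 5-dimensional feature vector.
seqs = [torch.randn(4, 5), torch.randn(2, 5), torch.randn(1, 5)]
lengths = torch.tensor([len(s) for s in seqs])

# Pad to a rectangular (batch, max_len, features) tensor.
padded = pad_sequence(seqs, batch_first=True)  # shape: (3, 4, 5)

# Pack: the RNN only runs over the real timesteps, not the padding.
# enforce_sorted=True requires sequences sorted by descending length.
packed = pack_padded_sequence(padded, lengths, batch_first=True, enforce_sorted=True)

rnn = nn.LSTM(input_size=5, hidden_size=8, batch_first=True)
packed_out, (h_n, c_n) = rnn(packed)

# Unpack back to a padded tensor if you need per-timestep outputs.
out, out_lengths = pad_packed_sequence(packed_out, batch_first=True)
print(out.shape)  # torch.Size([3, 4, 8])
```

Note that the final hidden state h_n already corresponds to each sequence's last real timestep, not the padding, which is one of the main reasons to pack rather than feed the padded tensor straight into the RNN.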