I understand why we use
padded_seq_batch when training RNNs on variable-length inputs, but I can't help wondering:
why is it so complicated? Couldn't we just make a list of all the input sequences in the batch and iterate over it?
I suppose this is a performance issue, but I don't really understand why.
Can somebody explain it in simple terms?
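To make the question concrete, here is a toy sketch of the two approaches I have in mind (this is my own illustration, using a NumPy sum as a stand-in for an RNN step, not actual RNN code):

```python
import numpy as np

# Three variable-length sequences (toy data).
seqs = [[1, 2, 3], [4, 5], [6]]

# Option A -- what I mean by "just iterate": process each sequence
# one at a time in a Python loop.
outputs_loop = [sum(s) for s in seqs]

# Option B -- what padding does: copy the sequences into one
# rectangular array (zeros fill the short rows) so the whole batch
# can be handled by a single vectorized operation.
max_len = max(len(s) for s in seqs)
padded = np.zeros((len(seqs), max_len), dtype=int)
for i, s in enumerate(seqs):
    padded[i, :len(s)] = s

outputs_padded = padded.sum(axis=1)  # one call over the whole batch
```

Both give the same result here, so why go through the trouble of building the padded rectangle at all?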