Unpacked sequences have different lengths than expected

Hi,

I was hoping someone could confirm that my understanding is correct, and maybe even explain why this happens, because I don't see any clues in the documentation.

I'm training an LSTM on variable-length sequences, and I noticed that after unpacking, the training sequences have a different length than the validation ones, even though I pad all data points to the same max length when initializing the dataset. I believe they end up truncated to the max length of that particular batch. I pack/unpack the two sets separately in the training loop (my dataset is pretty small, so I feed the entire training set as a single batch). A minimal repro of what I think is happening is below.
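Here's a toy sketch (made-up shapes, not my actual model) showing what I suspect: `pad_packed_sequence` pads back only to the longest sequence in the batch unless you pass `total_length`:

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

MAX_LEN = 10  # dataset-wide padding length

# one toy batch padded to MAX_LEN, but with shorter true lengths
batch = torch.zeros(3, MAX_LEN, 5)   # (batch, seq, features)
lengths = torch.tensor([7, 4, 2])    # longest real sequence is 7

packed = pack_padded_sequence(batch, lengths, batch_first=True,
                              enforce_sorted=False)

# default unpack: padded only to the longest sequence *in this batch*
unpacked, _ = pad_packed_sequence(packed, batch_first=True)
print(unpacked.shape)  # torch.Size([3, 7, 5]) -- not MAX_LEN

# forcing the dataset-wide length back
unpacked_full, _ = pad_packed_sequence(packed, batch_first=True,
                                       total_length=MAX_LEN)
print(unpacked_full.shape)  # torch.Size([3, 10, 5])
```

If that's indeed the cause, is passing `total_length` the recommended fix, or should I be handling this differently?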

Thank you!