How to combine variable-length sentences into one batch?

I want to train a neural conversation model. Can I build batches without padding, using the original variable-length sentences, to make training faster?

You can look at the examples here: https://github.com/pytorch/pytorch/releases/tag/v0.1.10
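
For reference, here is a minimal sketch of the pack_padded_sequence / pad_packed_sequence API from that release; the tensor sizes and toy lengths are just illustrative assumptions, not from the thread:

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

# A padded, time-major batch (max_len x batch x input_size) holding 3
# sequences with true lengths 5, 3, 2; positions past each length are
# padding. pack_padded_sequence expects the batch sorted longest-first.
batch = torch.randn(5, 3, 10)
lengths = [5, 3, 2]

lstm = nn.LSTM(input_size=10, hidden_size=20)

packed = pack_padded_sequence(batch, lengths)   # padding steps are skipped
packed_out, (h_n, c_n) = lstm(packed)
output, out_lengths = pad_packed_sequence(packed_out)  # back to a padded tensor
```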

It seems that PackedSequence is not suitable for seq2seq models (e.g. machine translation, conversation models), because when you order the source sentences in a batch by their length, the target sentences will no longer be sorted by length.
Am I right or wrong? Please tell me, thanks.

That's right. You can reorder one or both batches after passing them through nn.LSTM, but that does reduce the speed gain of nn.LSTM with PackedSequence over manually unrolled LSTMCells.

Sorry, I can't figure out how to reorder the batch after passing it through nn.LSTM. Could you make it clearer, or show a simple example? Thanks.
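
A minimal sketch of the reordering trick described above, assuming a padded, time-major source batch and an nn.LSTM encoder (all names and sizes here are illustrative assumptions): sort the sources by length so they can be packed, run the encoder, then invert the permutation so the encoder outputs line up with the unsorted target batch again.

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

src = torch.randn(7, 4, 10)              # (max_src_len, batch, input_size), padded
src_lengths = torch.tensor([3, 7, 2, 5]) # true lengths, in original batch order

# Sort by descending length (required for packing) and remember how to undo it.
sorted_lengths, sort_idx = src_lengths.sort(descending=True)
unsort_idx = sort_idx.argsort()          # inverse permutation

encoder = nn.LSTM(input_size=10, hidden_size=20)

packed = pack_padded_sequence(src[:, sort_idx], sorted_lengths)
packed_out, (h_n, c_n) = encoder(packed)
enc_out, _ = pad_packed_sequence(packed_out)

# Restore the original batch order: row i now matches target sentence i again,
# so the decoder can consume the targets without re-sorting them.
enc_out = enc_out[:, unsort_idx]
h_n, c_n = h_n[:, unsort_idx], c_n[:, unsort_idx]
```

The extra indexing is exactly the overhead mentioned above: it costs a little, but it lets you pack the source side while leaving the target side in its original order.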