How to combine variable-length sentences into one batch?
I want to train a neural conversation model. Can I build batches without padding, using the original variable-length sentences, so that training is faster?
It seems that PackedSequence is not suitable for Seq2Seq models (e.g. machine translation, conversation models): when you order the source sentences in the batch by length, the target sentences are no longer in the same order.
Am I right or wrong? Please tell me, thanks.
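A minimal sketch of the mismatch being described, assuming a toy padded batch (the tensor shapes, lengths, and variable names here are illustrative, not from the original post):

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

# Toy padded source batch: 3 sentences of lengths 5, 2, 4 (batch_first layout)
src = torch.randn(3, 5, 8)            # (batch, max_len, embed_dim)
src_lengths = torch.tensor([5, 2, 4])

# Packing requires the batch to be sorted by descending length
sorted_lengths, sort_idx = src_lengths.sort(descending=True)
packed = pack_padded_sequence(src[sort_idx], sorted_lengths.tolist(),
                              batch_first=True)

# sort_idx is [0, 2, 1]: the target batch is still in the original order,
# so the (source, target) pairs no longer line up unless the targets are
# permuted the same way.
```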
That’s right. You can reorder one or both batches after passing them to nn.LSTM, but that does reduce the speed gain from using nn.LSTM with PackedSequences over manually unrolled LSTMCells.
Sorry, I can’t figure out how to reorder the batch after passing it to nn.LSTM. Could you make it clearer or show a simple example? Thanks.
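A minimal sketch of the reordering idea from the reply above, under the same toy assumptions as before (the encoder dimensions and variable names are hypothetical): sort the sources by length, pack, run the LSTM, unpack, then apply the inverse permutation so the outputs line up with the unsorted targets again.

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

embed_dim, hidden_dim = 8, 16
encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

src = torch.randn(3, 5, embed_dim)    # padded (batch, max_len, embed_dim)
src_lengths = torch.tensor([5, 2, 4])

# 1) Sort sources by descending length, as packing requires
sorted_lengths, sort_idx = src_lengths.sort(descending=True)
packed = pack_padded_sequence(src[sort_idx], sorted_lengths.tolist(),
                              batch_first=True)

# 2) Run the packed batch through the LSTM and unpack the outputs
packed_out, (h, c) = encoder(packed)
out, _ = pad_packed_sequence(packed_out, batch_first=True)

# 3) Invert the permutation so the encoder outputs are back in the
#    original batch order and aligned with the (unsorted) target batch
inv_idx = sort_idx.argsort()
out = out[inv_idx]
h = h[:, inv_idx]   # hidden states are (num_layers, batch, hidden)
c = c[:, inv_idx]
```

The key step is `inv_idx = sort_idx.argsort()`: sorting the permutation indices yields the inverse permutation, so only the source side ever needs to be sorted and the targets can be left untouched.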