How to sort before pack_padded_sequence if you have more than one sentence

Hey, I have more than one language modality in my model. To use pack_padded_sequence you need to sort the batch by sequence length, but the two modalities obviously have different length orders. Right now I pass each one separately through an LSTM and concatenate the outputs afterwards, which means I need to get back to the original order.

Does it make sense to sort by length, and transform back to the original order after the LSTM? I’m worried I’ll mess something up with the gradients.


You can pad the two sequences to a fixed length (using pad_sequence), sort each batch before passing it into the LSTM network, unsort the outputs afterwards, and then mask out the padded positions in the loss. Unsorting is just indexing, which is differentiable, so the gradients are fine.
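A minimal sketch of the pad → sort → pack → LSTM → unpack → unsort pattern for one modality (the sizes and the LSTM dimensions are made up for illustration; you'd repeat this per modality and then concatenate):

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence, pad_packed_sequence

torch.manual_seed(0)

# Toy batch: 3 variable-length sequences of 5-dim embeddings
seqs = [torch.randn(4, 5), torch.randn(2, 5), torch.randn(6, 5)]
lengths = torch.tensor([s.size(0) for s in seqs])

# 1) Pad to a common length
padded = pad_sequence(seqs, batch_first=True)   # (batch, max_len, 5)

# 2) Sort by length (descending), remembering the permutation
sorted_lens, sort_idx = lengths.sort(descending=True)
unsort_idx = sort_idx.argsort()                 # inverse permutation

lstm = nn.LSTM(input_size=5, hidden_size=7, batch_first=True)

# 3) Pack the sorted batch, run the LSTM, unpack
packed = pack_padded_sequence(padded[sort_idx], sorted_lens, batch_first=True)
out_packed, _ = lstm(packed)
out, _ = pad_packed_sequence(out_packed, batch_first=True)

# 4) Restore the original batch order so it lines up with the other
#    modality. Indexing is differentiable, so gradients flow back
#    through the unsort.
out = out[unsort_idx]       # (3, 6, 7), rows in the original order
```

Note that recent PyTorch versions also accept `enforce_sorted=False` in pack_padded_sequence, which does this sort/unsort bookkeeping for you internally.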