Hello, everyone
I am new to PyTorch, and I have a question. PyTorch's BiLSTM has the structure where the forward and reversed LSTMs each run over the same input, and then the two outputs are concatenated as the BiLSTM's output, just as the picture below shows:
My question is: does PyTorch support another kind of BiLSTM, with the structure where the reversed LSTM takes the forward LSTM's output as its input? Just as this picture shows:
If PyTorch doesn't support this kind of structure, how can I implement it myself? And how can I still use pad_packed_sequence for batching?
Your model could compute one layer at a time and reverse the output along the time dimension after each layer.
Supporting padded packed sequences will add a little complexity, though.
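A minimal sketch of that suggestion, assuming `batch_first` tensors. The module and helper names (`StackedReversingLSTM`, `flip_padded`) are my own for illustration, not part of PyTorch; the packed-sequence branch shows where the extra complexity comes in, since each padded sequence must be reversed only within its valid length:

```python
import torch
import torch.nn as nn

def flip_padded(seq, lengths):
    # Reverse each sequence along the time axis only within its valid
    # length, so the padding stays at the end (simple per-row loop).
    out = seq.clone()
    for i, l in enumerate(lengths):
        out[i, :l] = seq[i, :l].flip(0)
    return out

class StackedReversingLSTM(nn.Module):
    # Each layer consumes the previous layer's output with the time
    # axis reversed, so odd-numbered layers effectively read the
    # sequence backwards -- the structure described in the question.
    def __init__(self, input_size, hidden_size, num_layers=2):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.LSTM(input_size if i == 0 else hidden_size,
                    hidden_size, batch_first=True)
            for i in range(num_layers)
        )

    def forward(self, x, lengths=None):
        # x: (batch, seq_len, input_size); lengths: optional list of
        # per-sequence valid lengths for padded batches.
        out = x
        for layer in self.layers:
            if lengths is not None:
                packed = nn.utils.rnn.pack_padded_sequence(
                    out, lengths, batch_first=True, enforce_sorted=False)
                packed_out, _ = layer(packed)
                out, _ = nn.utils.rnn.pad_packed_sequence(
                    packed_out, batch_first=True)
                out = flip_padded(out, lengths)
            else:
                out, _ = layer(out)
                out = out.flip(1)  # reverse the whole time axis
        return out
```

Note that if you want the final output in forward time order, you may need one last flip when `num_layers` is odd, since the reversal happens after every layer including the last.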