I am trying to use pytorch.text for my sequence-tagger model. Basically, the model predicts the Part-Of-Speech tag for each token in a sentence. My model consists of one LSTM and one bidirectional LSTM. First, a character-based LSTM creates the word embedding for the target token by processing its characters; then the second LSTM takes these word embeddings as input and processes them (in both directions) to predict the POS tag for the target word.
The problem is that I need to batch both the characters and the words. Is there a way to do that with pytorch.text?
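To make the two-level batching concrete, here is a minimal sketch in plain Python (no torchtext), showing one common approach: pad every word to the longest word in the batch at the character level, and every sentence to the longest sentence at the word level, yielding a [batch, max_words, max_chars] structure. The `PAD_CHAR` token and the function name are my own illustrative choices, not part of any library.

```python
# Two-level padding sketch: chars within words, words within sentences.
PAD_CHAR = "<pad>"

def pad_batch(sentences):
    """sentences: list of sentences, each a list of words (strings).
    Returns a [batch, max_words, max_chars] nested list of characters."""
    max_words = max(len(s) for s in sentences)
    max_chars = max(len(w) for s in sentences for w in s)
    batch = []
    for sent in sentences:
        padded_sent = []
        for word in sent:
            # pad each word's characters to the longest word in the batch
            padded_sent.append(list(word) + [PAD_CHAR] * (max_chars - len(word)))
        # pad short sentences with all-padding "words"
        for _ in range(max_words - len(sent)):
            padded_sent.append([PAD_CHAR] * max_chars)
        batch.append(padded_sent)
    return batch

batch = pad_batch([["This", "is", "an", "example"], ["Hi", "there"]])
# every sentence now has the same number of word slots,
# and every word the same number of character slots
```

A nested list like this can then be converted to index tensors for the character LSTM, with the padding positions masked out.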
Let me give an example:
Sentence: This is an example sentence .
The first LSTM creates a word embedding for each word in the sentence: W1, W2, W3, W4, W5, W6.
For example, it produces W2 by processing the characters of the second word “is”, which are “i” and “s”. Then the forward LSTM (of the bi-LSTM) processes W1, W2 and the backward LSTM (of the bi-LSTM) processes W6, W5, W4, W3, W2. Then I concatenate the hidden states and make a prediction.
I am not asking about the model implementation. I am asking whether pytorch.text is flexible enough to prepare the data for such a problem.
As far as I can see, I cannot mini-batch the inputs for the LSTM and the inputs for the bi-LSTM at the same time. Batching sentences with the same number of WORDS seems more feasible to me, but I am also open to other suggestions. I would also appreciate it if you could show me any similar code snippets.
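The bucketing-by-word-count idea can be sketched without torchtext at all: sort the sentences by length and cut the sorted list into batches, so each batch contains sentences of (nearly) equal word count. This is the same idea that torchtext's `BucketIterator` implements with a `sort_key`; the function below is just an illustrative stand-in.

```python
# Length bucketing sketch: group sentences of similar word count together,
# so each batch needs little or no word-level padding.
def bucket_batches(sentences, batch_size):
    """sentences: list of tokenized sentences (lists of words).
    Returns a list of batches of similar-length sentences."""
    ordered = sorted(sentences, key=len)
    return [ordered[i:i + batch_size]
            for i in range(0, len(ordered), batch_size)]

sents = [["a", "b", "c"], ["a"], ["a", "b"], ["a", "b", "c", "d"]]
batches = bucket_batches(sents, batch_size=2)
# → [[['a'], ['a', 'b']], [['a', 'b', 'c'], ['a', 'b', 'c', 'd']]]
```

Within each bucketed batch you would still pad characters to the longest word, but the word dimension stays uniform, which matches the "batch sentences with the same number of words" suggestion above.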