How to set up a many-to-one LSTM network

I have this network, which I took mainly from this tutorial, and I want it to take sentences as input (which already works) and return just a one-row tensor as a result.

From the tutorial, the sentence "John's dog likes food" gets a tensor returned with one row per word:

tensor([[-3.0462, -4.0106, -0.6096],
        [-4.8205, -0.0286, -3.9045],
        [-3.7876, -4.1355, -0.0394],
        [-0.0185, -4.7874, -4.6013]])

tag_list = ["name", "verb", "noun"]

where each row holds the log-probabilities of each tag for the corresponding word. (The first word has the vector [-3.0462, -4.0106, -0.6096], where the last element is the maximum, so it corresponds to the highest-scoring tag.)
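To make that concrete, the per-word prediction can be read off with an argmax over each row (a small sketch using the scores above):

```python
import torch

# Log-probabilities from the tutorial's example sentence (one row per word).
tag_scores = torch.tensor([[-3.0462, -4.0106, -0.6096],
                           [-4.8205, -0.0286, -3.9045],
                           [-3.7876, -4.1355, -0.0394],
                           [-0.0185, -4.7874, -4.6013]])

tag_list = ["name", "verb", "noun"]

# argmax over dim=1 picks the highest-scoring tag index for each word.
predicted = [tag_list[i] for i in tag_scores.argmax(dim=1)]
print(predicted)  # ['noun', 'verb', 'noun', 'name']
```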

The tutorial’s dataset looks like this:

training_data = [
    ("The dog ate the apple".split(), ["DET", "NN", "V", "DET", "NN"]),
    ("Everybody read that book".split(), ["NN", "V", "DET", "NN"])
]

And I want mine to be of this format:

training_data = [
    ("one man".split(), ["ONE"]),
    ("two birds".split(), ["TWO"]),
    ("three stones".split(), ["THREE"])
]
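With this format, an input tensor has one index per word but the target has length 1, which is exactly where a size mismatch can appear. A sketch of the data preparation (the vocabularies and the `prepare_sequence` helper are built the same way the tutorial does; the names are assumptions):

```python
import torch

training_data = [
    ("one man".split(), ["ONE"]),
    ("two birds".split(), ["TWO"]),
    ("three stones".split(), ["THREE"]),
]

# Build word and tag vocabularies as in the tutorial.
word_to_ix = {}
for sent, _ in training_data:
    for word in sent:
        word_to_ix.setdefault(word, len(word_to_ix))
tag_to_ix = {"ONE": 0, "TWO": 1, "THREE": 2}

def prepare_sequence(seq, to_ix):
    return torch.tensor([to_ix[w] for w in seq], dtype=torch.long)

# "one man" becomes two word indices, but the target is a single tag index.
sentence_in = prepare_sequence(training_data[0][0], word_to_ix)
target = prepare_sequence(training_data[0][1], tag_to_ix)
print(sentence_in.shape, target.shape)  # torch.Size([2]) torch.Size([1])
```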

The parameters are defined as:

import torch.nn as nn
import torch.nn.functional as F

class LSTMTagger(nn.Module):
    def __init__(self, embedding_dim, hidden_dim, vocab_size, tagset_size):
        super(LSTMTagger, self).__init__()
        self.hidden_dim = hidden_dim
        self.word_embeddings = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim)
        self.hidden2tag = nn.Linear(hidden_dim, tagset_size)

    def forward(self, sentence):
        embeds = self.word_embeddings(sentence)
        lstm_out, _ = self.lstm(embeds.view(len(sentence), 1, -1))
        tag_space = self.hidden2tag(lstm_out.view(len(sentence), -1))
        tag_scores = F.log_softmax(tag_space, dim=1)
        return tag_scores

As of now, the sizes of the input and the target do not match:
ValueError: Expected input batch_size (2) to match target batch_size (1).

I solved this issue by simply taking the hidden state of the last time step from lstm_out.

I am sorry for the inconvenience.