Facing an issue while using batching in LSTM

I see several issues:

  • You create your nn.LSTM with batch_first=True, but then you reshape your input so that the sequence length is the first dimension: embeds.view(len(sentence), 1, -1). With batch_first=True, the LSTM expects input of shape (batch, seq_len, features), so this puts the sequence length in the batch dimension (see the first sketch after this list).

  • Your view() calls might be a problem anyway, since view() only reinterprets the underlying memory and never swaps axes (see the second sketch below); please see this post of mine. The following change should actually work:

    lstm_out, _ = self.lstm(embeds)
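
To make the first point concrete, here is a minimal sketch of the shapes nn.LSTM expects when batch_first=True. The sizes (batch_size, seq_len, embedding_dim, hidden_dim) are made up for illustration and are not taken from your code:

    import torch
    import torch.nn as nn

    # Illustrative sizes only, not from the original model.
    batch_size, seq_len, embedding_dim, hidden_dim = 4, 10, 32, 64

    lstm = nn.LSTM(embedding_dim, hidden_dim, batch_first=True)

    # With batch_first=True, the input must be (batch, seq_len, features).
    embeds = torch.randn(batch_size, seq_len, embedding_dim)
    lstm_out, (h_n, c_n) = lstm(embeds)

    print(lstm_out.shape)  # torch.Size([4, 10, 64]) -- (batch, seq_len, hidden_dim)

If your embedding layer already produces (batch, seq_len, embedding_dim), you can feed it to the LSTM directly, with no view() at all.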
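And to illustrate the second point: view() only reinterprets the underlying memory in row-major order, so using it to "transpose" a tensor silently scrambles the values. permute() (or transpose()) is what actually swaps axes. A small sketch:

    import torch

    x = torch.arange(6).reshape(2, 3)  # [[0, 1, 2], [3, 4, 5]]

    # view() keeps the memory order and just re-chunks it: wrong for transposing.
    wrong = x.view(3, 2)     # [[0, 1], [2, 3], [4, 5]]

    # permute() moves element (i, j) to (j, i): a true transpose.
    right = x.permute(1, 0)  # [[0, 3], [1, 4], [2, 5]]

The same logic applies to juggling (batch, seq_len, features) dimensions: if you ever need to reorder them, use permute(), not view().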