LSTM with doc2vec word embedding

An LSTM takes a sequence. To get meaningful use out of an LSTM, you need to pass in a sequence of length greater than 1. If you're condensing your entire sequence into a single vector, you could probably just use a fully connected network (linear layers and activations) to accomplish your task.
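A minimal sketch of that fully connected alternative, assuming doc2vec gives you one fixed-size vector per document (the 300-d size and 4-class output here are just placeholder assumptions):

```python
import torch
import torch.nn as nn

embed_dim = 300   # doc2vec vector size (assumption)
num_classes = 4   # hypothetical number of labels

# One vector per document, so no recurrence needed:
model = nn.Sequential(
    nn.Linear(embed_dim, 128),
    nn.ReLU(),
    nn.Linear(128, num_classes),
)

doc_vecs = torch.randn(8, embed_dim)  # batch of 8 document vectors
logits = model(doc_vecs)              # shape: (8, num_classes)
```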

But if you have a sequence of sentences, each encoded to a vector, you can pass those vectors into an LSTM in order, either all at once or one step at a time. See here for an example with code: LSTM on Time series with CrossEntropyLoss is unstable - #9 by J_Johnson
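A sketch of both options, assuming each sentence is already a 300-d doc2vec vector (dimensions are illustrative):

```python
import torch
import torch.nn as nn

embed_dim, hidden_dim = 300, 64
lstm = nn.LSTM(input_size=embed_dim, hidden_size=hidden_dim, batch_first=True)

# Batch of 2 documents, each a sequence of 5 sentence vectors.
sentences = torch.randn(2, 5, embed_dim)

# Option 1: feed the whole sequence at once.
out, (h, c) = lstm(sentences)        # out: (2, 5, hidden_dim)

# Option 2: feed one sentence at a time, carrying the hidden state forward.
state = None
for t in range(sentences.size(1)):
    step = sentences[:, t:t + 1, :]  # (2, 1, embed_dim)
    step_out, state = lstm(step, state)

# Both paths arrive at the same final hidden state.
print(torch.allclose(out[:, -1], step_out[:, 0], atol=1e-5))
```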