Difference between seq_len and input_size in LSTM?

I am going through the docs:

but I am a little confused:

Is seq_len the number of time steps (i.e., the number of words in a sentence),
and input_size the embedding size (i.e., the number of features per word)?

Yes. seq_len is the number of time steps; in the case of sentences, that is the number of words (or characters, for character-level models). input_size is the number of features per time step, which derives from the size of your word or character embeddings. For example, if you use Word2Vec or GloVe embeddings of size 300 (i.e., each word is represented as a vector with 300 dimensions), the input_size of your LSTM needs to be 300.
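A minimal sketch to make the shapes concrete (the sizes here are illustrative assumptions: 10-word sentences, a batch of 4, 300-dimensional embeddings, and an arbitrary hidden size of 128):

```python
import torch
import torch.nn as nn

seq_len, batch_size, input_size = 10, 4, 300  # 10 words, 300-dim embeddings
hidden_size = 128                             # arbitrary choice for this sketch

lstm = nn.LSTM(input_size=input_size, hidden_size=hidden_size)

# By default nn.LSTM expects input of shape (seq_len, batch, input_size).
x = torch.randn(seq_len, batch_size, input_size)  # stand-in for embedded words
output, (h_n, c_n) = lstm(x)

print(output.shape)  # torch.Size([10, 4, 128]) -- one hidden state per time step
print(h_n.shape)     # torch.Size([1, 4, 128])  -- final hidden state
```

So seq_len only affects how many steps the LSTM unrolls over; input_size must match the last dimension of each input vector (here, the embedding size).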
