Using nn.LSTM with a train_loader

Hello everyone.

I’m currently learning to use nn.LSTM with PyTorch, and I’m a bit confused by the description of the module’s expected input.

Basically, I’m trying to feed my dataset matrix (M x N) as M time steps with N features each.
Since the dataset is one big matrix, I wanted to feed it sequentially into the LSTM network via a DataLoader wrapped around a torch.utils.data.Dataset.

The point where I got confused is the expected input size: (seq_len, batch, input_size).

Let’s say I build my data_loader with batch_size=10.
In order to generate the train_loader in the right form, I had to reshape the original (M x N) matrix into windows that include the sequence length, which can simply be transformed to (M/seq_len, seq_len, N).
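Here is a minimal sketch of what I mean (the class name SequenceDataset and the concrete numbers are just placeholders; I drop the remainder rows so M divides evenly by seq_len):

```python
import torch
from torch.utils.data import DataLoader, Dataset

class SequenceDataset(Dataset):
    def __init__(self, data, seq_len):
        # data: (M, N) float tensor; drop the tail so M divides evenly by seq_len
        m, n = data.shape
        usable = (m // seq_len) * seq_len
        # reshape into (M/seq_len, seq_len, N) non-overlapping windows
        self.windows = data[:usable].reshape(-1, seq_len, n)

    def __len__(self):
        return self.windows.shape[0]

    def __getitem__(self, idx):
        return self.windows[idx]  # one (seq_len, N) window

data = torch.randn(1000, 8)            # M=1000 time steps, N=8 features
dataset = SequenceDataset(data, seq_len=20)
train_loader = DataLoader(dataset, batch_size=10, shuffle=False)
```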

Then each batch that the train_loader feeds to my nn.LSTM has size (batch_size, seq_len, N), and there are M/seq_len/batch_size such batches per epoch. (Since the default layout is (seq_len, batch, input_size), I’d either permute the batch or set batch_first=True.)
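For example, a minimal sketch of the forward pass continuing from the loader above (hidden_size=32 is an arbitrary choice):

```python
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=32, batch_first=True)

for batch in train_loader:
    # batch: (batch_size, seq_len, N) = (10, 20, 8)
    output, (h_n, c_n) = lstm(batch)
    # output: (batch_size, seq_len, hidden_size) -> hidden state at every time step
    # h_n:    (num_layers, batch_size, hidden_size) -> hidden state at the last step
    break
```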

So here comes my main question:
If I feed batches of this shape into the LSTM model nn.LSTM(N, hidden_size),
does the LSTM automatically perform the “recurrent operation” over the sequence for the whole batch?

I want to understand what actually happens inside this LSTM layer… anyway, the operation should be applied recurrently over both the sequence dimension and the batch dimension, right?
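To make the question concrete, this is my (possibly wrong) mental model of what a single-layer nn.LSTM does internally, written out with nn.LSTMCell:

```python
# One recurrence step per time step; all batch elements advance in parallel
cell = nn.LSTMCell(input_size=8, hidden_size=32)

batch = torch.randn(10, 20, 8)   # (batch_size, seq_len, N)
h = torch.zeros(10, 32)          # hidden state, one row per batch element
c = torch.zeros(10, 32)          # cell state

outputs = []
for t in range(batch.shape[1]):          # loop over the seq_len dimension
    h, c = cell(batch[:, t, :], (h, c))  # step all 10 sequences at once
    outputs.append(h)
out = torch.stack(outputs, dim=1)        # (batch_size, seq_len, hidden_size)
```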

I’m not sure I’ve made the question clear, since this is just my current state of working with nn.LSTM (quite messed up… lol). I hope someone can help me organize these ideas.

Thanks in advance :slight_smile: