Libtorch's LSTM input shape

Hello everyone. Could someone please explain the shape of the LSTM input to me: "tensor of shape (L, Hin) for unbatched input, (L, N, Hin) when batch_first=False or (N, L, Hin) when batch_first=True containing the features of the input sequence." I want to know the difference between these two shapes: (L, N, Hin) and (N, L, Hin).

They have just switched dimensions between L and N. See documentation:

batch_first – If True, then the input and output tensors are provided as (batch, seq, feature) instead of (seq, batch, feature).

Thanks for the reply. I know that they just switched the dimensions. If I want to feed a sequence to an LSTM, which one should I use concretely?

You can use either of the two variants. You just have to make your input's memory layout match the batch_first setting you chose.
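Here is a minimal sketch showing both layouts on the same data, written in Python/PyTorch since that is the API the quoted docs describe (the LibTorch C++ equivalent would use `torch::nn::LSTMOptions(...).batch_first(true)`); the sizes `L`, `N`, `H_in`, `H_out` are arbitrary illustrative values:

```python
import torch
import torch.nn as nn

L, N, H_in, H_out = 5, 3, 10, 20  # seq length, batch size, input features, hidden size

# batch_first=False (the default): input is (L, N, H_in)
lstm_seq_first = nn.LSTM(input_size=H_in, hidden_size=H_out)
x = torch.randn(L, N, H_in)
out, (h_n, c_n) = lstm_seq_first(x)
print(out.shape)  # torch.Size([5, 3, 20]) -> (L, N, H_out)

# batch_first=True: the same data, laid out as (N, L, H_in)
lstm_batch_first = nn.LSTM(input_size=H_in, hidden_size=H_out, batch_first=True)
x_bf = x.permute(1, 0, 2)  # reorder dims from (L, N, H_in) to (N, L, H_in)
out_bf, _ = lstm_batch_first(x_bf)
print(out_bf.shape)  # torch.Size([3, 5, 20]) -> (N, L, H_out)
```

Either module processes the sequence the same way; only the dimension order of the input and output tensors changes, so pick whichever matches how your data is already stored.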

OK, I see. Thank you
