LSTM documentation

Can someone please explain to me how the documentation should be interpreted?
Why does it say ‘(L, N, H_{in})’ when in fact it takes an integer and not a tuple?

Could you explain in detail?

I am a bit new to LSTMs. I read online how they work, but I am a bit confused about how to implement them. Can I have a tuple as the input shape? Also, how can I define the number of LSTMs stacked on top of each other, and how can I define different designs such as many-to-many?
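
For example, is something like this the right idea? (Just a guess from reading the docs; the sizes here are made up.)

import torch
import torch.nn as nn

# input_size and hidden_size are plain integers, not tuples
lstm = nn.LSTM(input_size=3, hidden_size=4, num_layers=2)  # num_layers stacks two LSTMs

# the (L, N, H_in) from the docs describes the *shape* of the input tensor:
# L = sequence length, N = batch size, H_in = input_size
x = torch.randn(5, 100, 3)
out, (h_n, c_n) = lstm(x)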

Sorry, but it would be better for you to read other materials before asking something here.
Try googling keywords like ‘lstm pytorch example’.

Thanks

I took a fresh look at the documentation and some examples today. Just to confirm what I read:

lstm = nn.LSTM(input_size=3, hidden_size=4, num_layers=2, batch_first=True, bidirectional=True)

This means that the input tensor has the shape (batch_size, seq_len, input_size) and the output tensor has the shape (batch_size, seq_len, 2*hidden_size), since batch_first=True.
The 2*hidden_size arises because bidirectional=True: the forward and backward outputs are concatenated.
Lastly, num_layers determines how many LSTM layers are stacked on top of each other, and together with the two directions it sets the first dimension of the (hidden_state, cell_state) tensors to num_layers * num_directions.
inputs.shape = (100, 5, 3), hidden[0].shape = hidden[1].shape = (2*2, 100, 4)
out, hidden = lstm(inputs, hidden)
where:
‘out’ contains the hidden states of the last layer for every time step in the sequence.
‘hidden’ is a tuple of (hidden_state, cell_state) holding the final time step’s states for every layer and direction, not just the last layer.
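
A quick sanity check of the shapes (a minimal sketch; the zero tensors are just placeholder initial states):

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=3, hidden_size=4, num_layers=2, batch_first=True, bidirectional=True)

inputs = torch.randn(100, 5, 3)     # (batch_size, seq_len, input_size)
h_0 = torch.zeros(2 * 2, 100, 4)    # (num_layers * num_directions, batch_size, hidden_size)
c_0 = torch.zeros(2 * 2, 100, 4)

out, (h_n, c_n) = lstm(inputs, (h_0, c_0))

print(out.shape)  # torch.Size([100, 5, 8])  = (batch_size, seq_len, 2 * hidden_size)
print(h_n.shape)  # torch.Size([4, 100, 4])  = (num_layers * num_directions, batch_size, hidden_size)
print(c_n.shape)  # torch.Size([4, 100, 4])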


You could watch this video: torch.nn.RNN Module explained. It is very similar to torch.nn.LSTM.