In an LSTM, why is h_t output twice?

According to the LSTM design:

[image: LSTM cell diagram]

The hidden state (h_t) is output twice (1 and 2 in the picture).

  1. If they are the same, why do we need both?
  2. Is there a different use for each of them?
  3. According to nn.LSTM, there are 3 outputs (output, h_n, c_n).
     I don't understand the difference between output and h_n. Shouldn't they be the same?

Hi.
An LSTM is a type of recurrent neural network, so the output at time t affects the processing of the input at time t+1 (each input at a specific time is known as a time-step).
Because of that, we need h_t twice: one copy goes to the next layer (or the output), and one copy goes to the next time-step.
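To make that concrete, here is a minimal sketch that unrolls a single LSTM layer by hand with nn.LSTMCell (the sizes 10/20/3/5 are arbitrary, chosen just for illustration). The same h_t is appended to the per-step outputs for the next layer and also fed back in as the recurrent input of the next time-step:

```python
import torch
import torch.nn as nn

cell = nn.LSTMCell(input_size=10, hidden_size=20)
x = torch.randn(5, 3, 10)              # (seq_len=5, batch=3, input_size=10)
h_t = torch.zeros(3, 20)               # initial hidden state
c_t = torch.zeros(3, 20)               # initial cell state

outputs = []
for x_t in x:                          # iterate over time-steps
    h_t, c_t = cell(x_t, (h_t, c_t))   # h_t, c_t feed the next time-step
    outputs.append(h_t)                # the same h_t is this step's output

output = torch.stack(outputs)          # (5, 3, 20): what the next layer sees
```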
I think this diagram can help you:

In PyTorch, nn.LSTM has three outputs:
output: the h_t of every time-step; this is what we feed to the next layer or the output layer.
h_n: the same hidden state, but only for the last time-step; this is what you carry over when you need it for the next time-step. It is known as the hidden state.
c_n: the cell state (the upper horizontal line in the diagram, with the two operators on it, multiplication and addition).
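A small sketch of the relationship (shapes are arbitrary): for a single-layer, unidirectional nn.LSTM, output stacks h_t for every time-step, while h_n holds only the final one, so output[-1] matches h_n[0]:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=1)
x = torch.randn(5, 3, 10)                  # (seq_len, batch, input_size)

output, (h_n, c_n) = lstm(x)
print(output.shape)                        # torch.Size([5, 3, 20]): h_t for each step
print(h_n.shape)                           # torch.Size([1, 3, 20]): last h_t only
print(c_n.shape)                           # torch.Size([1, 3, 20]): last cell state
print(torch.allclose(output[-1], h_n[0]))  # True
```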
