In the documentation of nn.RNN it is written that “x_t is the hidden state of the previous layer at time t or input_t for the first layer.” I couldn’t understand the “hidden state of the previous layer” part. Isn’t x_t just the input of the RNN?

Hi, suppose you stack two `RNN` layers. Then the input to the first layer is of course `input_t`, and the input to the second layer is the output of the first layer, that is, the hidden state of the first layer.
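To make this concrete, here is a minimal sketch (sizes are made up for illustration): stacking two single-layer `nn.RNN` modules by hand, so that the second layer consumes the first layer’s hidden states as its “input”:

```python
import torch
import torch.nn as nn

# Two single-layer RNNs stacked by hand: the input to rnn2 at each
# time step is the hidden state (output) of rnn1 at that same step.
rnn1 = nn.RNN(input_size=4, hidden_size=8)  # first layer: x_t is the raw input
rnn2 = nn.RNN(input_size=8, hidden_size=8)  # second layer: its "x_t" is rnn1's hidden state

x = torch.randn(5, 3, 4)   # (seq_len=5, batch=3, input_size=4)
out1, h1 = rnn1(x)         # out1: (5, 3, 8), the hidden states of layer 1 at every step
out2, h2 = rnn2(out1)      # layer 2 takes layer 1's hidden states as input

print(out2.shape)          # torch.Size([5, 3, 8])
```

This is (up to weight initialization) what `nn.RNN(input_size=4, hidden_size=8, num_layers=2)` does internally.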

Thanks for the answer. I am very new to this area, so I may be wrong; please correct me if I am.

In an Elman network, the inputs to a hidden layer at time t are the outputs of that hidden layer from time t-1 and the inputs fed to the network at time t: h_t = f(x_t, h_{t-1}). So x_t and h_{t-1} are supposed to be two different inputs. But what I understood from the documentation and your explanation is that h_{t-1} = x_t.
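To make sure I am picturing it the same way, here is a minimal sketch of one Elman step for a single layer (the names `elman_step`, `W_ih`, `W_hh` are mine, just for illustration, not the PyTorch API):

```python
import torch

# One hand-written Elman step: h_t is computed from two *different*
# inputs, x_t (this time step's input) and h_prev (the same layer's
# hidden state from the previous time step).
def elman_step(x_t, h_prev, W_ih, W_hh, b):
    return torch.tanh(x_t @ W_ih.T + h_prev @ W_hh.T + b)

x_t = torch.randn(3, 4)     # batch=3, input_size=4
h_prev = torch.zeros(3, 8)  # batch=3, hidden_size=8
W_ih = torch.randn(8, 4)    # input-to-hidden weights
W_hh = torch.randn(8, 8)    # hidden-to-hidden weights
b = torch.zeros(8)

h_t = elman_step(x_t, h_prev, W_ih, W_hh, b)
print(h_t.shape)            # torch.Size([3, 8])
```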