I am using the `nn.LSTM()` layer with `bidirectional=True`. I have a 3-layer network, so I get a `hidden` tensor of shape [6, BatchSize, hidden_size]. The first dimension of `hidden` contains both the forward and backward directions of the LSTM for each of the 3 layers, but I am not clear on how that tensor is laid out. Does `hidden[0, :, :]` correspond to the first (bottom) layer in the forward direction, and `hidden[1, :, :]` to the bottom layer in the backward direction?

I actually need to concatenate the hidden states from the forward and backward passes for each layer. I imagine that would look something like the code below, but can someone correct me if I am wrong?

```
output, (hidden, cell) = self.lstm1(x)
new_hidden = torch.cat((hidden[0::2, :, :], hidden[1::2, :, :]), dim=2)
```

So `new_hidden` should be of size [3, BatchSize, 2 * hidden_size].
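A quick way to sanity-check the layout is to compare the strided slicing against the view the PyTorch docs describe, `(num_layers, num_directions, batch, hidden_size)`, where direction 0 is forward and direction 1 is backward. A minimal sketch (with arbitrary, hypothetical sizes) along those lines:

```python
import torch
import torch.nn as nn

# Hypothetical sizes for illustration only.
num_layers, batch, input_size, hidden_size, seq_len = 3, 4, 5, 7, 10

# Bidirectional, 3-layer LSTM; default batch_first=False,
# so input is (seq_len, batch, input_size).
lstm = nn.LSTM(input_size, hidden_size, num_layers=num_layers, bidirectional=True)
x = torch.randn(seq_len, batch, input_size)
output, (hidden, cell) = lstm(x)

# hidden has shape (num_layers * 2, batch, hidden_size), layer-major with
# the forward direction first: index 2*layer is forward, 2*layer + 1 is backward.
# So hidden[0::2] picks the forward state of every layer, hidden[1::2] the backward.
new_hidden = torch.cat((hidden[0::2, :, :], hidden[1::2, :, :]), dim=2)

# Cross-check against the documented view (num_layers, num_directions, batch, hidden_size).
h = hidden.view(num_layers, 2, batch, hidden_size)
assert torch.equal(new_hidden, torch.cat((h[:, 0], h[:, 1]), dim=2))
```

If the assertion holds, the `0::2` / `1::2` slicing indeed pairs each layer's forward and backward states, and `new_hidden` comes out as (3, BatchSize, 2 * hidden_size).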