Multi-layer, Bi-directional LSTMs

As per the documentation on PyTorch LSTMs, the final hidden state `hn`, when `bidirectional=True` and `num_layers=4`, will have shape `(2 * 4, batch_size, H_out)` (note that `batch_first=True` affects only the input and output tensors, not `hn`). How can I extract the hidden state output by only the last layer (i.e. the "topmost" layer, not the last time-step)? And how can I extract only the hidden state from the forward direction? I couldn't find the correct scheme to `unflatten()` `hn` to do the above.
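For reference, the PyTorch docs state that the first dimension of `hn` is laid out layer-major, direction-minor, so it can be separated with a `view` of shape `(num_layers, num_directions, batch_size, H_out)`. A minimal sketch of the indexing (tensor sizes chosen arbitrarily for illustration):

```python
import torch
import torch.nn as nn

num_layers, hidden_size, input_size, batch_size, seq_len = 4, 8, 5, 3, 7

lstm = nn.LSTM(
    input_size=input_size,
    hidden_size=hidden_size,
    num_layers=num_layers,
    bidirectional=True,
    batch_first=True,
)

x = torch.randn(batch_size, seq_len, input_size)
output, (hn, cn) = lstm(x)

# hn: (num_layers * num_directions, batch_size, hidden_size)
# Separate layers and directions: dim 0 is ordered
# (layer0-fwd, layer0-bwd, layer1-fwd, layer1-bwd, ...)
hn_view = hn.view(num_layers, 2, batch_size, hidden_size)

last_layer = hn_view[-1]        # (2, batch_size, hidden_size): fwd + bwd of top layer
last_layer_fwd = hn_view[-1, 0] # (batch_size, hidden_size): forward direction only
last_layer_bwd = hn_view[-1, 1] # (batch_size, hidden_size): backward direction only
```

Equivalently with `unflatten`: `hn.unflatten(0, (num_layers, 2))[-1, 0]`. A quick sanity check is that `last_layer_fwd` equals `hn[-2]` and `last_layer_bwd` equals `hn[-1]`, since the top layer occupies the last two slots of dim 0.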