Get the last hidden state of sequence on a batch with variable sequence length

I want to get the last hidden state for each sequence in a batch (the sequences have different lengths) after feeding them through a unidirectional nn.LSTM, i.e. the output at the last non-padded step. My current approach is:

List[Tensor] -> Padded Tensor -> PackPaddedSequence -> LSTM -> PadPackedSequence -> Select hidden state of last step using length

import torch
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence, pad_packed_sequence

a = torch.ones(25, 300)
b = torch.ones(22, 300)
c = torch.ones(15, 300)
padded_seq = pad_sequence([a, b, c])  # torch.Size([25, 3, 300])
lengths = torch.tensor([25, 22, 15])  # sorted in decreasing order
inp_seq = pack_padded_sequence(padded_seq, lengths=lengths)

lstm = torch.nn.LSTM(input_size=300, hidden_size=150, num_layers=2)
out, _ = lstm(inp_seq)
out_tensor, inp_length = pad_packed_sequence(out)
b_size = inp_length.size(0)
# pick the output at step (length - 1) for each sequence in the batch
last_hidden = out_tensor[inp_length - 1, range(b_size)].contiguous()  # torch.Size([3, 150])
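Incidentally, since the input is a PackedSequence, my understanding from the docs is that the h_n returned by the LSTM is already taken at each sequence's own last valid step, so it could serve as a cross-check (a sketch, reusing the same toy tensors):

```python
import torch
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence

torch.manual_seed(0)
a, b, c = torch.ones(25, 300), torch.ones(22, 300), torch.ones(15, 300)
inp_seq = pack_padded_sequence(pad_sequence([a, b, c]),
                               lengths=torch.tensor([25, 22, 15]))

lstm = torch.nn.LSTM(input_size=300, hidden_size=150, num_layers=2)
out, (h_n, c_n) = lstm(inp_seq)
# h_n: (num_layers, batch, hidden_size); for a packed input it holds each
# sequence's final (non-padded) state, so h_n[-1] is the top layer's last
# hidden state per batch element -- no manual indexing by length needed.
last_hidden = h_n[-1]  # torch.Size([3, 150])
```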

My question is:

  • Is this the correct way to get that hidden state? (It feels a little clumsy.)
  • How can I adapt it for bidirectional=True?
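For context on the second question, this is the shape handling I believe applies with bidirectional=True (a sketch based on my reading of the docs, not necessarily the idiomatic way): h_n has shape (num_layers * 2, batch, hidden_size), so the last layer's forward and backward final states can be separated with a view and concatenated:

```python
import torch
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence

a, b, c = torch.ones(25, 300), torch.ones(22, 300), torch.ones(15, 300)
inp_seq = pack_padded_sequence(pad_sequence([a, b, c]),
                               lengths=torch.tensor([25, 22, 15]))

lstm = torch.nn.LSTM(input_size=300, hidden_size=150,
                     num_layers=2, bidirectional=True)
out, (h_n, c_n) = lstm(inp_seq)
# h_n: (num_layers * 2, batch, hidden_size); separate layers from directions,
# take the last layer, then concatenate forward and backward final states.
h_n = h_n.view(2, 2, 3, 150)  # (num_layers, num_directions, batch, hidden)
last_hidden = torch.cat([h_n[-1, 0], h_n[-1, 1]], dim=-1)  # torch.Size([3, 300])
```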