Suppose that I have the following LSTM:
lstm = nn.LSTM(5, 100, num_layers=1, bidirectional=True, batch_first=False)
output, (hidden_state, cell_state) = lstm(inputs)
Where hidden_state is of shape
torch.Size([2, 10, 100])
And that I want to concatenate the final forward and final backward hidden states:
torch.cat((hidden_state[-2, :, :], hidden_state[-1, :, :]), dim=1)
Which results in the shape
torch.Size([10, 200])
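For reference, a minimal runnable sketch reproducing these shapes (the input sequence length of 7 is an arbitrary choice for illustration):

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(5, 100, num_layers=1, bidirectional=True, batch_first=False)
inputs = torch.randn(7, 10, 5)  # (seq_len, batch, input_size)

output, (hidden_state, cell_state) = lstm(inputs)
print(hidden_state.shape)  # torch.Size([2, 10, 100])

# Concatenating the final forward and backward hidden states along the
# feature dimension drops the (num_layers * num_directions) dimension:
cat = torch.cat((hidden_state[-2, :, :], hidden_state[-1, :, :]), dim=1)
print(cat.shape)  # torch.Size([10, 200])
```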
How can I do this concatenation without losing the first dimension of hidden_state, which here is 2 (num_layers × num_directions = 1 × 2)?
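One possible approach (a sketch, not necessarily the only way): the PyTorch docs note that h_n can be viewed as (num_layers, num_directions, batch, hidden_size), so the direction axis can be split out first and then merged into the feature dimension, keeping a leading layer dimension:

```python
import torch

# Stand-in tensor with the shapes from the question:
# (num_layers * num_directions, batch, hidden_size) = (2, 10, 100)
hidden_state = torch.randn(2, 10, 100)

# Separate layers from directions, per the documented h_n layout:
h = hidden_state.view(1, 2, 10, 100)  # (num_layers, num_directions, batch, hidden)

# Concatenate forward (index 0) and backward (index 1) states along the
# feature dimension, preserving the layer dimension:
combined = torch.cat((h[:, 0], h[:, 1]), dim=-1)
print(combined.shape)  # torch.Size([1, 10, 200])
```

For the single-layer case this is equivalent to simply calling `.unsqueeze(0)` on the concatenated result, but the view-based form generalizes to `num_layers > 1`.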