Talita
May 17, 2020, 10:56am
#1
Suppose that I have the following LSTM:

```
lstm = nn.LSTM(5, 100, num_layers=1, bidirectional=True, batch_first=False)
output, (hidden_state, cell_state) = lstm(x)  # x: input of shape (seq_len, 10, 5)
```

Where `hidden_state` is of shape

```
torch.Size([2, 10, 100])
```

And that I want to concatenate the final forward and final backward hidden states of `hidden_state`:

```
torch.cat((hidden_state[-2, :, :], hidden_state[-1, :, :]), dim=1)
```

Which results in the shape

```
torch.Size([10, 200])
```

How can I do this concatenation without losing the first dimension of `hidden_state`, which here is 2 (num_layers × num_directions = 1 × 2)?

You could use range indices to keep the dimension, or simply `unsqueeze` the result:

```
import torch

x = torch.randn([2, 10, 100])
res = torch.cat((x[0:1, :, :], x[1:2, :, :]), dim=1)  # torch.Size([1, 20, 100])
```
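If you would rather use `unsqueeze`, a minimal sketch reusing the same `x` (plain integer indexing drops dim0, so it is restored afterwards):

```
# index out each direction, concatenate along the batch dim, then restore dim0
res = torch.cat((x[0], x[1]), dim=0).unsqueeze(0)  # torch.Size([1, 20, 100])
```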

Talita
May 18, 2020, 2:40pm
#3
Thanks! But that would still result in

```
torch.Size([1, 20, 100])
```

and not in `torch.Size([2, 20, 100])`, right?

Yes, because you are concatenating in dim1.
I’m not sure what your use case is, but your desired output shape contains twice the number of sliced elements. Would you like to repeat some values, or how else would you like to achieve this shape?
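For instance, if repeating the values is indeed what you want (an assumption on my side, since the use case isn’t stated), a minimal sketch:

```
# doubling dim1 by repeating the whole tensor yields the desired shape
res = torch.cat((x, x), dim=1)  # torch.Size([2, 20, 100])
# equivalently: x.repeat(1, 2, 1)
```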