Good day, I am trying to build a seq2seq LSTM model in PyTorch, but I get an error from the following snippet in my decoder:

```
print("entered")
print(h0.shape)
output, (h,c) = self.lstm(seq_embed, h0)
output_pred = self.fc(output)
ans = output_pred.argmax(-1)
```

Here h0 is simply the hidden state returned by my encoder:

```
output, (h,c) = self.lstm(seq_embed, h0)
```

and the print statements in the decoder snippet above output:

```
entered
torch.Size([1, 13, 128])
```

It complains about my h0 tensor being the wrong size, but from the printed output above my dimensions seem correct, so I am not sure how to proceed.

While investigating the rest of the traceback I came upon the section below. My second question: does my h0 tensor require two “layers” for hidden[0] and hidden[1], which would essentially make it a (2, 13, 128) tensor? Any assistance would be appreciated.

```
--> 533 self.check_hidden_size(hidden[0], expected_hidden_size,
    534     'Expected hidden[0] size {}, got {}')
    535 self.check_hidden_size(hidden[1], expected_hidden_size,
    536     'Expected hidden[1] size {}, got {}')
```
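To sanity-check that second question, I compared the two interpretations of hidden[0] and hidden[1] with plain tensors (shapes taken from my prints; no model involved), since I am not sure which one the traceback intends:

```python
import torch

h = torch.zeros(1, 13, 128)  # shape my encoder's hidden state has
c = torch.zeros(1, 13, 128)

# Interpretation 1: one stacked (2, 13, 128) tensor -- indexing drops a dimension
stacked = torch.zeros(2, 13, 128)
print(stacked[0].shape)  # torch.Size([13, 128])

# Interpretation 2: hidden as the (h, c) tuple -- each element keeps all 3 dims
hidden = (h, c)
print(hidden[0].shape)   # torch.Size([1, 13, 128])
print(hidden[1].shape)   # torch.Size([1, 13, 128])
```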