Hello! I have been trying to understand what is going on for a while, but I cannot make sense of the error below:

RuntimeError: input.size(-1) must be equal to input_size. Expected 1, got 2

My training data contains only one feature column, so I pass `n_features=1`. The labels can only be 0 or 1, and since this is binary classification I pass `n_classes=1`.

```python
class ModuleLSTM(nn.Module):
    def __init__(self, n_features, n_classes, n_hidden=256, n_layers=3):
        super().__init__()
        self.lstm = nn.LSTM(
            input_size=n_features,
            hidden_size=n_hidden,
            num_layers=n_layers,
            batch_first=True,
            dropout=0.75,
        )
        self.classifier = nn.Linear(n_hidden, n_classes)

    def forward(self, x):
        self.lstm.flatten_parameters()
        _, (hidden, _) = self.lstm(x)
        return self.classifier(hidden[-1])
```
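For what it's worth, here is a minimal repro of the exception outside my training loop (the `hidden_size=8` is just a placeholder, not my real setting):

```python
import torch
import torch.nn as nn

# an LSTM built for a single input feature, like my model
lstm = nn.LSTM(input_size=1, hidden_size=8, batch_first=True)

# feeding a (batch, seq_len, 2) tensor reproduces the exact error
try:
    lstm(torch.zeros(64, 5, 2))
except RuntimeError as e:
    print(e)  # input.size(-1) must be equal to input_size. Expected 1, got 2

# a (batch, seq_len, 1) tensor goes through fine
out, (hidden, cell) = lstm(torch.zeros(64, 5, 1))
print(out.shape)     # torch.Size([64, 5, 8])
print(hidden.shape)  # torch.Size([1, 64, 8])
```

So the model itself seems consistent with `n_features=1`; the problem appears to be the extra feature dimension in the batches.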

When I print `x` inside the `forward` method, I see that its shape is torch.Size([64, 5, 2]), where 64 and 5 correspond to the batch size and sequence length, respectively. I don't understand why the last dimension of `x` has a second column that is entirely zeros:

```
tensor([[[-4.3775e-01,  0.0000e+00],
         [-4.7356e-01,  0.0000e+00],
         [-4.9494e-01,  0.0000e+00],
         [-5.2778e-01,  0.0000e+00],
         [-5.5412e-01,  0.0000e+00]],
        ...
        [[ 2.7826e+00,  0.0000e+00],
         [ 2.7535e+00,  0.0000e+00],
         [ 2.7076e+00,  0.0000e+00],
         [ 2.6636e+00,  0.0000e+00],
         [ 2.6562e+00,  0.0000e+00]]])
```
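As a sanity check I tried slicing off the zero column before the forward pass, which at least makes the shapes line up (just a sketch; I suspect the real fix is in how my Dataset builds the feature tensor in the first place):

```python
import torch

x = torch.randn(64, 5, 2)
x[..., 1] = 0.0         # mimic the all-zero second column I'm seeing
x_single = x[..., :1]   # keep only the real feature column
print(x_single.shape)   # torch.Size([64, 5, 1])
```

Is dropping the column like this reasonable, or does the all-zero column point to a bug upstream in my data pipeline?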