Using LSTM after Conv1D for Time Series Data

I am not able to understand exactly what input needs to be given to the LSTM layer. It expects previously computed hidden and cell states, but I do not have these states. How do I proceed with the forward function? For now, this is the model, and it gives me the following error:
AttributeError: 'tuple' object has no attribute 'size'.

class ConvLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        
        self.conv1 = nn.Conv1d(8, 16, kernel_size=8)
        self.conv2 = nn.Conv1d(16, 32, kernel_size=8)
        self.conv3 = nn.Conv1d(32, 64, kernel_size=8)
        
        self.bn1 = nn.BatchNorm1d(64)
        
        self.conv4 = nn.Conv1d(64, 64, kernel_size=8)
        self.conv5 = nn.Conv1d(64, 128, kernel_size=8)
        
        self.bn2 = nn.BatchNorm1d(128)

        self.lstm1 = nn.LSTM(12, 100)
        self.lstm2 = nn.LSTM(100, 128)
        
        self.fc1 = nn.Linear(128, 64)
        self.fc2 = nn.Linear(64, 32)
        self.fc3 = nn.Linear(32, classes)
        
    def exec_conv_block(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = F.relu(self.conv3(x))
        
        x = F.max_pool1d(x, 2)
        x = self.bn1(x)
        
        x = F.relu(self.conv4(x))
        x = F.relu(self.conv5(x))
        
        x = F.max_pool1d(x, 2)
        x = self.bn2(x)
               
        return x
    
    def forward(self, x):
        x = self.exec_conv_block(x)

        x, state = self.lstm1(x)
        x, _ = self.lstm2(x, state)
        
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        
        return x

The nn.LSTM module expects the following inputs:

  • input of shape (seq_len, batch, input_size): tensor containing the features of the input sequence. The input can also be a packed variable length sequence. See torch.nn.utils.rnn.pack_padded_sequence() or torch.nn.utils.rnn.pack_sequence() for details.
  • h_0 of shape (num_layers * num_directions, batch, hidden_size): tensor containing the initial hidden state for each element in the batch. If the LSTM is bidirectional, num_directions should be 2, else it should be 1.
  • c_0 of shape (num_layers * num_directions, batch, hidden_size): tensor containing the initial cell state for each element in the batch. If (h_0, c_0) is not provided, both h_0 and c_0 default to zero.

as given in the docs.
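As a quick sanity check, here is a minimal sketch of those shapes (the sizes are made up for illustration: seq_len=12, batch=4, input_size=128, hidden_size=100, one unidirectional layer):

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=128, hidden_size=100)    # default: batch_first=False
x = torch.randn(12, 4, 128)                        # (seq_len, batch, input_size)
h0 = torch.zeros(1, 4, 100)                        # (num_layers * num_directions, batch, hidden_size)
c0 = torch.zeros(1, 4, 100)                        # same shape as h0

out, (hn, cn) = lstm(x, (h0, c0))
print(out.shape)   # torch.Size([12, 4, 100]) -> (seq_len, batch, hidden_size)
print(hn.shape)    # torch.Size([1, 4, 100])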
Also note that the default input shapes have the temporal dimension in dim0, so you might want to permute the x tensor coming from exec_conv_block. Alternatively, you could also use batch_first=True when creating the LSTM, which would then expect the input in the shape [batch_size, seq_len, nb_features].
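For example (a sketch, assuming the conv block outputs a (batch, channels, length) tensor such as (4, 128, 12), and that the 128 channels should act as the LSTM's input features):

import torch
import torch.nn as nn

x = torch.randn(4, 128, 12)   # hypothetical conv output: (batch, channels, length)

# Option A: default layout, sequence first -> (seq_len, batch, input_size)
lstm_a = nn.LSTM(input_size=128, hidden_size=100)
out_a, _ = lstm_a(x.permute(2, 0, 1))              # out_a: (12, 4, 100)

# Option B: batch_first=True -> (batch, seq_len, input_size)
lstm_b = nn.LSTM(input_size=128, hidden_size=100, batch_first=True)
out_b, _ = lstm_b(x.permute(0, 2, 1))              # out_b: (4, 12, 100)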
The hidden and cell states are expected in the shape [num_layers*num_directions, batch_size, hidden_size], so your current workflow of passing the states from lstm1 to lstm2 won’t work, since the hidden_size is different.
If you don’t pass the hidden states to the module, it’ll initialize them with zeros.
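Putting this together, the forward pass could look roughly like the sketch below. It is not a drop-in fix: it assumes the LSTM layers are created with batch_first=True and matching input sizes, e.g. self.lstm1 = nn.LSTM(128, 100, batch_first=True) and self.lstm2 = nn.LSTM(100, 128, batch_first=True), and that only the last time step is passed to the fully connected head:

def forward(self, x):
    x = self.exec_conv_block(x)      # (batch, 128, L)
    x = x.permute(0, 2, 1)           # (batch, L, 128) for batch_first LSTMs

    x, _ = self.lstm1(x)             # states default to zeros
    x, _ = self.lstm2(x)             # don't reuse lstm1's states: hidden sizes differ

    x = x[:, -1]                     # last time step only, shape (batch, 128)
    x = F.relu(self.fc1(x))
    x = F.relu(self.fc2(x))
    return self.fc3(x)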

Thank you for this! Really new to PyTorch and this helped.