How to combine several LSTM networks into one model?

I am trying to create three separate LSTM networks and then merge them into one big model. From my understanding, I can create three LSTM networks and then create a class that merges those networks together. Is that correct?
I am kind of new to this.

Yes, that should be possible, as you can freely concatenate the outputs of several modules and pass the result to a new module.
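As a minimal illustration (using two Linear modules as stand-ins for any submodules), the outputs can be concatenated along the feature dimension and fed to a third module:

```python
import torch

# Two independent modules producing 4 features each
a = torch.nn.Linear(3, 4)
b = torch.nn.Linear(3, 4)
# The "merging" module takes the concatenated 4 + 4 = 8 features
head = torch.nn.Linear(8, 1)

x = torch.randn(2, 3)  # batch of 2 samples, 3 features
merged = torch.cat((a(x), b(x)), dim=1)  # shape: (2, 8)
out = head(merged)
print(out.shape)  # torch.Size([2, 1])
```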

I have tried to code this, but this is as far as I got. I am not sure how to determine the input and output dimensions of concat_layer. Also, what goes in the forward function for the LSTM calls?
Any help is greatly appreciated.

class Net(torch.nn.Module):
    def __init__(self):

        super(Net, self).__init__()
        self.lstm1 = torch.nn.LSTM(input_size = 1, hidden_size = 2, num_layers = 1, batch_first = True)
        self.lstm2 = torch.nn.LSTM(input_size = 1, hidden_size = 2, num_layers = 1, batch_first = True)
        self.lstm3 = torch.nn.LSTM(input_size = 1, hidden_size = 2, num_layers = 1, batch_first = True)
        self.concat_layer = torch.nn.Linear(?, ?)
        self.linear = torch.nn.Linear(,1)

    def forward(self, x):

        lstm1 = ?
        lstm2 = ?
        lstm3 = ?
        concat = torch.cat((lstm1, lstm2, lstm3), dim=1)
        output = self.linear(concat)
        return output

The LSTM docs explain the expected inputs and outputs along with their shapes.

Once your input works, you could add a print statement in the forward method to check the shape of the concatenated tensor and adapt the in_features of the linear layers accordingly.
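To make this concrete, here is one way to fill in the placeholders. This is a sketch under some assumptions: each LSTM gets its own input of shape (batch, seq_len, 1), only the last time step of each LSTM output is kept, and the hidden size of concat_layer (4 here) is an arbitrary choice you would tune yourself. Since each LSTM has hidden_size=2, the concatenated tensor has 3 * 2 = 6 features:

```python
import torch

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm1 = torch.nn.LSTM(input_size=1, hidden_size=2, num_layers=1, batch_first=True)
        self.lstm2 = torch.nn.LSTM(input_size=1, hidden_size=2, num_layers=1, batch_first=True)
        self.lstm3 = torch.nn.LSTM(input_size=1, hidden_size=2, num_layers=1, batch_first=True)
        # 3 LSTMs * hidden_size 2 = 6 in_features; 4 out_features is an arbitrary choice
        self.concat_layer = torch.nn.Linear(6, 4)
        self.linear = torch.nn.Linear(4, 1)

    def forward(self, x1, x2, x3):
        # each out has shape (batch, seq_len, hidden_size)
        out1, _ = self.lstm1(x1)
        out2, _ = self.lstm2(x2)
        out3, _ = self.lstm3(x3)
        # keep the last time step of each sequence, then concatenate the features
        concat = torch.cat((out1[:, -1], out2[:, -1], out3[:, -1]), dim=1)
        output = self.linear(self.concat_layer(concat))
        return output

model = Net()
x = torch.randn(4, 10, 1)  # batch=4, seq_len=10, features=1
out = model(x, x, x)
print(out.shape)  # torch.Size([4, 1])
```

If all three LSTMs should instead see the same input, you could pass a single x and feed it to each of them inside forward.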