What are the best practices for Multi-Sequence RNNs?

So I have an input tensor of shape (17, 240, 512), where the dimensions represent sequence_index, sequence_length, and num_features respectively. I don't want to treat these 17 sequences as one whole sequence by merging dimensions and squeezing everything through a single LSTM, so I ended up writing some very repulsive code:

self.LSTMs = nn.ModuleList([nn.LSTM(512, 512) for _ in range(17)])  # one LSTM per sequence
out_tensor = torch.zeros_like(x)
for i in range(x.shape[0]):
    # nn.LSTM returns (output, (h_n, c_n)); keep only the output
    out_tensor[i], _ = self.LSTMs[i](x[i])

However, since these LSTMs run one after another in the loop, I think the different LSTMs could in principle run in parallel for a quite significant speedup. Is there some kind of layer (a bagging layer or something similar) or custom code that would let me achieve this?
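For reference, the one parallel baseline I know of is sharing weights: the 17 sequences can be treated as a batch and pushed through a single LSTM in one call. This is a minimal sketch of that baseline, not what I ultimately want, since here every sequence is processed by the same weights rather than by its own LSTM:

```python
import torch
import torch.nn as nn

# Shared-weight baseline: the 17 sequences become a batch of 17,
# so one LSTM call processes them all in parallel.
lstm = nn.LSTM(input_size=512, hidden_size=512, batch_first=True)

x = torch.randn(17, 240, 512)  # (sequence_index, sequence_length, num_features)
out, (h_n, c_n) = lstm(x)      # out has shape (17, 240, 512)
```

What I'm after is the same single-call parallelism but with 17 independent sets of LSTM weights.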