I am trying to use Conv1d and LSTM layers together. The output of the Conv1d layer has shape [8, 32, 10], i.e. Batch x Channels x Seq. Len., so I cannot feed it to the LSTM layer directly. When I use the permute function to swap the channel and sequence-length dimensions, training works correctly. But is this the correct way to map the conv output to a sequence?
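
For reference, here is a minimal sketch of what I mean. The layer sizes are hypothetical, chosen only to reproduce the [8, 32, 10] shape, and I assume `batch_first=True` on the LSTM so it expects (batch, seq_len, features):

```python
import torch
import torch.nn as nn

# Hypothetical layers sized so conv output is (batch=8, channels=32, seq_len=10)
conv = nn.Conv1d(in_channels=1, out_channels=32, kernel_size=3)
lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)

x = torch.randn(8, 1, 12)        # (batch, in_channels, length)
feat = conv(x)                   # -> (8, 32, 10): Batch x Channels x Seq. Len.

# LSTM with batch_first=True expects (batch, seq_len, features),
# so swap the channel and sequence dimensions:
feat = feat.permute(0, 2, 1)     # -> (8, 10, 32)

out, (h, c) = lstm(feat)         # out: (8, 10, 64)
print(feat.shape, out.shape)
```

With this permutation, each of the 32 conv channels becomes a feature of the per-timestep input vector, and the 10 positions along the sequence axis become the LSTM's timesteps.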