Should I permute the input's dimensions for the first layer, which is Conv1D?

The input shape I have used is (Batches X HistoricalDataSets X InputTypes), e.g. 64x50x15.
By InputTypes, I mean the different functions whose values I'm going to use as input.
By HistoricalDataSets, I mean the most recent sets of values from those functions.

What shape is best for the first Conv1D layer to work with? Should I permute it so that it is (Batches X InputTypes X HistoricalDataSets)?

I use 2 layers of Conv1D + MaxPooling + ReLU
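For reference, a minimal PyTorch sketch of the stack described above (two blocks of Conv1d + ReLU + MaxPool). The channel counts and kernel sizes here are assumptions for illustration, not values from my actual model:

```python
import torch
import torch.nn as nn

# Hypothetical layer sizes; only the in_channels=15 and seq_len=50
# come from the shapes mentioned above.
model = nn.Sequential(
    nn.Conv1d(in_channels=15, out_channels=32, kernel_size=3),  # -> (64, 32, 48)
    nn.ReLU(),
    nn.MaxPool1d(kernel_size=2),                                # -> (64, 32, 24)
    nn.Conv1d(in_channels=32, out_channels=64, kernel_size=3),  # -> (64, 64, 22)
    nn.ReLU(),
    nn.MaxPool1d(kernel_size=2),                                # -> (64, 64, 11)
)

x = torch.randn(64, 15, 50)  # (Batches, InputTypes, HistoricalDataSets)
out = model(x)
print(out.shape)  # torch.Size([64, 64, 11])
```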

Conv1d layers are typically used on sequences. By sequence, I mean data where consecutive values carry information in their positional relationship (i.e. if you shuffled the order around, you'd lose important information). The third dim should be the sequence length.

The second dim should be used for channels. By channel, I mean that you could reorder the channels and not lose relevant information, because they do not have a positional relationship with one another.

Each trainable kernel convolves along the sequence dim, with a separate set of weights for each input channel; the per-channel results are summed to produce one output channel.
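Concretely, that means permuting your input as you suggested before feeding it to Conv1d. A minimal sketch using the shapes from your question:

```python
import torch
import torch.nn as nn

# Input as described in the question: (Batches, HistoricalDataSets, InputTypes)
x = torch.randn(64, 50, 15)

# nn.Conv1d expects (Batches, Channels, SequenceLength),
# so move InputTypes into the channel dim:
x = x.permute(0, 2, 1)  # -> (64, 15, 50)

# out_channels and kernel_size here are arbitrary example values
conv = nn.Conv1d(in_channels=15, out_channels=32, kernel_size=3)
out = conv(x)
print(out.shape)  # torch.Size([64, 32, 48]); length shrinks by kernel_size - 1
```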


Thank you
So input_channels=input_types…
Strangely, compared to before, the initial training and testing scores are worse; however, the model now takes far more actions instead of doing nothing. I hope for the best.