Hello,
So, imagine we have a 2D matrix input of shape m rows × n columns going into a conv2d layer. My goal is to preserve the number of rows of the matrix while running the same kernel along each row.
In more detail: say we apply a conv2d layer with a (1, k) kernel to the matrix. In my understanding, the layer would then learn one kernel of size (1, k) with individual parameters per row (and per feature map), resulting in kernels k1, …, km. Instead, what I would like to do is share the parameters of a single kernel of size (1, k) across all rows. In other words, the same kernel is run along each row and produces an output "at the end of the row". These outputs are further transformed and fed to the loss function, where they ultimately yield a gradient that we can backpropagate to update the k parameters of the kernel that was jointly run over each individual row of the matrix.
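To make the setup concrete, here is a minimal sketch of the situation (the values of m, n, and k are placeholders I picked purely for illustration):

```python
import torch
import torch.nn as nn

m, n, k = 8, 64, 5           # rows, columns, kernel width (illustrative values)
x = torch.randn(1, 1, m, n)  # (batch, channels, rows, columns)

# the layer in question: a (1, k) kernel slid along each row
conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=(1, k))
y = conv(x)
print(y.shape)  # torch.Size([1, 1, 8, 60]): the row count m is preserved
```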
The idea behind this approach is to (i) make training more efficient, as more parameters are shared, and (ii) learn a very general kernel that can understand each of the rows and learns their features jointly. Importantly, don't imagine my input matrix as a picture (for which this approach would probably not make sense). Rather, imagine each row represents a series of features, and all the series are highly correlated with each other (across the columns). Then the general kernel should be applicable to all rows simultaneously.
The problem: I have no idea how to modify the conv2d such that it makes use of only a single (1, k) kernel that is shared across all rows. Can somebody give me a hint here?
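To make the target computation unambiguous, here is a hand-rolled sketch of what I want the layer to do (the module name and shapes are my own, purely illustrative); I assume there is a cleaner way to express this with conv2d directly, which is exactly what I am asking for:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedRowConv(nn.Module):
    # Hypothetical reference implementation: one weight vector of length k,
    # applied along every row with fully shared parameters.
    def __init__(self, k):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(k))  # the k shared parameters
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        # x: (batch, m, n); fold the m rows into the batch dimension so that
        # one conv1d filter slides over every row with the same weights
        b, m, n = x.shape
        rows = x.reshape(b * m, 1, n)       # (b*m, in_channels=1, n)
        w = self.weight.view(1, 1, -1)      # (out_channels=1, in_channels=1, k)
        out = F.conv1d(rows, w, self.bias)  # (b*m, 1, n - k + 1)
        return out.view(b, m, -1)           # rows preserved: (batch, m, n - k + 1)
```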
Thanks!
Best, JZ