# 1d conv primitive ops

Regular conv1d is basically $a_i^c = \sum_{k, c'} \mathrm{weight}[c, c', k] \, x_{i+k}^{c'}$, where $x$ is the input with $c'$ the input channel dim, and weight has dimensions [out_channel_dim, in_channel_dim, kernel_size].

The questions:

1. Is there a way to do the convolution operation alone, i.e. without the summation over the channel dim $c'$?
2. I guess I could use einsum for this, but is it going to be fast enough (like cuDNN fast)?

functional.conv1d performs the actual convolution, sums over the channel dim, and adds the bias… I’d need some more primitive ops for my crazy experiments!

You could use unfold as given in this example to calculate your manual convolution.
However, it will likely be slower, but could be good enough for first experiments.
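Something along these lines might work as a starting point. This is just a sketch (the shapes, the `F.unfold` trick of treating the sequence as a 1-pixel-tall image, and the einsum subscripts are my own choices, not from any reference): unfold extracts the sliding windows, and einsum contracts only over the kernel index, keeping the input channel dim separate.

```python
import torch
import torch.nn.functional as F

batch, c_in, c_out, length, k = 2, 3, 4, 10, 5
x = torch.randn(batch, c_in, length)
weight = torch.randn(c_out, c_in, k)

# F.unfold expects a 4D input, so view the sequence as a (1 x length) image
patches = F.unfold(x.unsqueeze(2), kernel_size=(1, k))  # (batch, c_in * k, L_out)
patches = patches.view(batch, c_in, k, -1)              # (batch, c_in, k, L_out)

# Contract only over the kernel index k, keeping c_in separate:
# per_channel[b, o, c, i] = sum_k weight[o, c, k] * patches[b, c, k, i]
per_channel = torch.einsum('ock,bcki->boci', weight, patches)

# Summing over c_in afterwards recovers the standard conv1d (without bias)
out = per_channel.sum(dim=2)
assert torch.allclose(out, F.conv1d(x, weight), atol=1e-4)
```

Note that materializing the `per_channel` tensor costs an extra factor of `c_in` in memory compared to the fused conv.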

Hmm yeah I guess that would work! unfold does seem to return a new tensor instead of a view, so I guess it can use quite a bit of memory (especially for 2D and higher convolutions).

I’ll do some benchmarks vs. regular convolutions. Thanks @ptrblck!
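A rough benchmark could look like this (a sketch only; the problem sizes, iteration count, and helper name `manual_conv1d` are made up for illustration, and CPU wall-clock timing like this ignores CUDA synchronization):

```python
import time
import torch
import torch.nn.functional as F

def manual_conv1d(x, weight):
    # unfold + einsum path, summing over both kernel index and c_in
    k = weight.shape[-1]
    patches = F.unfold(x.unsqueeze(2), kernel_size=(1, k))
    patches = patches.view(x.shape[0], x.shape[1], k, -1)
    return torch.einsum('ock,bcki->boi', weight, patches)

x = torch.randn(32, 64, 1024)
weight = torch.randn(128, 64, 9)

for name, fn in [('F.conv1d', lambda: F.conv1d(x, weight)),
                 ('unfold+einsum', lambda: manual_conv1d(x, weight))]:
    fn()  # warm-up
    t0 = time.perf_counter()
    for _ in range(10):
        fn()
    print(f'{name}: {(time.perf_counter() - t0) / 10 * 1e3:.2f} ms/iter')
```

On GPU you'd additionally want `torch.cuda.synchronize()` around the timed region.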