1D Convolution using 2D Convolution

Here is a problem I am currently facing. I have a model that uses 1D convolution, which I would like to export to ONNX and compile with pytorch/glow or TVM. But neither project currently supports 1D convolution (see pytorch/glow/issues/3246 and tvm/issues/4587). A workaround that has been suggested (see an example) is to use a 2D convolution with height 1 in place of the 1D convolution.

Has anyone done this before in PyTorch? The PyTorch documentation for Conv2d says it supports “non-square kernels and unequal stride and with padding and dilation”, so it seems I could write a wrapper class for Conv1d that uses Conv2d underneath. Is there any downside to this? Does the math check out? It also looks like I would have to do the same for BatchNorm1d and pooling.
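For concreteness, here is a rough sketch of the wrapper I have in mind (the class name and the mapping of parameters are my own, untested assumptions):

```python
import torch
import torch.nn as nn


class Conv1dViaConv2d(nn.Module):
    """Stand-in for nn.Conv1d that runs nn.Conv2d with a height-1 kernel.

    Hypothetical sketch: treat the 1D length dimension as the width of a
    2D input of height 1, so kernel_size/stride/padding/dilation become
    (1, k), (1, s), (0, p), (1, d).
    """

    def __init__(self, in_channels, out_channels, kernel_size,
                 stride=1, padding=0, dilation=1, bias=True):
        super().__init__()
        self.conv2d = nn.Conv2d(
            in_channels, out_channels,
            kernel_size=(1, kernel_size),
            stride=(1, stride),
            padding=(0, padding),
            dilation=(1, dilation),
            bias=bias,
        )

    def forward(self, x):
        # x: (N, C, L) -> add a height dim of 1 -> (N, C, 1, L)
        x = x.unsqueeze(2)
        x = self.conv2d(x)
        # (N, C_out, 1, L_out) -> (N, C_out, L_out)
        return x.squeeze(2)
```

The idea is that only the forward pass needs the extra reshapes; the exported ONNX graph would then contain only 2D convolutions.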

Yes, of course. Think of a convolution as a sliding window: you can slide a kernel along a 1D signal, and a 2D convolution is just a spatial kernel sliding over two dimensions. If one of those dimensions has size 1, it is the same as a 1D convolution.
In fact, you can do a sanity check: apply a 1D convolution, then reuse its weights in a 2D convolution and confirm that both produce the same result.
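That sanity check could look roughly like this (a sketch; the layer sizes and tolerance are arbitrary choices of mine):

```python
import torch
import torch.nn as nn

# Sanity check: run a Conv1d, then reproduce its output with a Conv2d
# that reuses the same weights, reshaped to a height-1 kernel.
torch.manual_seed(0)

conv1d = nn.Conv1d(in_channels=2, out_channels=4, kernel_size=3, padding=1)
conv2d = nn.Conv2d(in_channels=2, out_channels=4,
                   kernel_size=(1, 3), padding=(0, 1))

with torch.no_grad():
    # (O, I, k) -> (O, I, 1, k)
    conv2d.weight.copy_(conv1d.weight.unsqueeze(2))
    conv2d.bias.copy_(conv1d.bias)

x = torch.randn(1, 2, 16)                   # (N, C, L)
out1d = conv1d(x)                           # (N, C_out, L)
out2d = conv2d(x.unsqueeze(2)).squeeze(2)   # add/remove the height dim

print(torch.allclose(out1d, out2d, atol=1e-5))
```

Up to floating-point rounding, the two outputs should match, since both compute the same sums of products.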