We know that any convolution can be decomposed into three operations: unfold → matmul → fold, regardless of whether it is conv1d, conv2d, or conv3d.
However, the existing fold() and unfold() APIs only support 4D tensors, i.e. batched 2D inputs.
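To make the decomposition concrete, here is a minimal sketch of a 2D convolution built from F.unfold plus a matmul (no padding, no bias, and the final "fold" is just a reshape in the forward pass); the function name is my own:

```python
import torch
import torch.nn.functional as F

def conv2d_via_unfold(x, weight, stride=1):
    # x: (batch, in_ch, H, W); weight: (out_ch, in_ch, kH, kW)
    out_ch, in_ch, kh, kw = weight.shape
    b, _, h, w = x.shape
    oh = (h - kh) // stride + 1
    ow = (w - kw) // stride + 1
    # unfold: extract sliding patches -> (batch, in_ch*kh*kw, oh*ow)
    cols = F.unfold(x, kernel_size=(kh, kw), stride=stride)
    # matmul: (out_ch, in_ch*kh*kw) @ cols -> (batch, out_ch, oh*ow)
    out = weight.view(out_ch, -1) @ cols
    # "fold" step: here just a reshape back to the spatial layout
    return out.view(b, out_ch, oh, ow)
```

This matches F.conv2d up to floating-point tolerance, which is an easy way to sanity-check the decomposition.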
Maybe I could implement conv3d- and conv1d-equivalent fold() and unfold() operations myself using OpenBLAS, CUDA, or cuBLAS, but I believe the PyTorch implementations would be the most efficient and well-tested.
This post instead uses the more flexible tensor.unfold method to compute the convolution, and you can use it as a template for a 1d or 3d conv layer.
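As a sketch of that idea, here is a 1D convolution built from Tensor.unfold, which slices one dimension into sliding windows; a matmul (written as einsum) then contracts over channels and kernel positions. No padding or bias is handled, and the function name is my own:

```python
import torch

def conv1d_via_unfold(x, weight, stride=1):
    # x: (batch, in_ch, length); weight: (out_ch, in_ch, k)
    k = weight.shape[-1]
    # Tensor.unfold slides a size-k window along dim 2:
    # result shape (batch, in_ch, n_windows, k)
    windows = x.unfold(dimension=2, size=k, step=stride)
    # matmul step: contract in_ch and k against the weight
    # -> (batch, out_ch, n_windows)
    return torch.einsum('bcwk,ock->bow', windows, weight)
```

Because Tensor.unfold works on any dimension, the same pattern extends to 3d by unfolding each spatial dimension in turn before the contraction.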