Dear All,

I'm working on a simulation algorithm where the linear algebra is handled by PyTorch. One step in the algorithm is a 1D convolution of two vectors. This step runs many times, so it needs to be fast. To speed things up further, I decided to allow batch processing of the input, which means I sometimes need to convolve two matrices row by row along the second dimension. I can get the 1D convolution to work with torch.conv1d, but I cannot figure out how to do the same for the matrix case. Below is a small example that produces the expected result, but it relies on a double for-loop, so it is not vectorized (and not very elegant), which will slow things down.

My question: how can I compute the row-wise 1D convolution of two matrices with torch.conv1d?
Example
import torch

na = 2
nv = 3
nbatch = 4

a1d = torch.randn(na)
v1d = torch.randn(nv)


def convolve(a, v):
    if a.ndim == 1:
        # 1D case: conv1d computes cross-correlation, so flip the kernel
        # and pad to get the full convolution (length na + nv - 1)
        padding = v.shape[-1] - 1
        b = torch.conv1d(
            input=a.view(1, 1, -1),
            weight=v.flip(0).view(1, 1, -1),
            padding=padding,
            stride=1,
        ).squeeze()
        return b
    elif a.ndim == 2:
        # 2D case: row-wise full convolution, but with an ugly double loop
        nrows, vcols = v.shape
        acols = a.shape[1]
        # outer product of each row of a with the matching row of v
        expanded = a.view((nrows, acols, 1)) * v.view((nrows, 1, vcols))
        noutdim = acols + vcols - 1  # length of the full convolution
        b = torch.zeros((nrows, noutdim))
        for i in range(acols):
            for j in range(vcols):
                b[:, i + j] += expanded[:, i, j]
        return b
    else:
        raise NotImplementedError
a2d = torch.cat([a1d[None, :], torch.randn((nbatch - 1, na))])
v2d = torch.cat([v1d[None, :], torch.randn((nbatch - 1, nv))])
b1d = convolve(a1d, v1d)
b2d = convolve(a2d, v2d)
print(b1d)
tensor([ 0.6887, 0.9372, -1.6958, -0.0101])
print(b2d) # notice that the first row matches that of b1d as expected
tensor([[ 0.6887, 0.9372, -1.6958, -0.0101],
[-0.0328, 0.9093, -0.6063, 0.4537],
[-0.2817, -0.9321, 1.0376, 1.4543],
[-2.8016, -1.6350, 1.2036, 0.3089]])
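Edit: after some more digging, I think the groups argument of torch.conv1d may do what I want, by treating each row as its own channel with its own kernel. Here is a sketch of what I have in mind (the name convolve_batched is just for illustration); I would still appreciate confirmation that this is the intended use of groups:

```python
import torch


def convolve_batched(a, v):
    # a: (nbatch, na), v: (nbatch, nv)
    # Treat the batch as channels: input (1, nbatch, na), one filter per
    # group with weight (nbatch, 1, nv), so row k of a is convolved only
    # with row k of v. The kernel is flipped because conv1d computes
    # cross-correlation, and padding = nv - 1 gives the full convolution.
    padding = v.shape[-1] - 1
    out = torch.conv1d(
        input=a.unsqueeze(0),            # (1, nbatch, na)
        weight=v.flip(-1).unsqueeze(1),  # (nbatch, 1, nv)
        padding=padding,
        groups=a.shape[0],
    )
    return out.squeeze(0)                # (nbatch, na + nv - 1)
```

If I understand the conv1d shape rules correctly, the output length is na + 2 * padding - nv + 1 = na + nv - 1, matching the double-loop version above.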