Complicated conv1d in parallel

Hi,
I’ve been trying to use the torch.nn.functional.conv1d function to perform multiple convolutions on the same input in parallel using the groups=... parameter, so far to no avail. Unlike other questions I’ve seen answered, I intend to apply every convolution to the entire batch (i.e. not one kernel per item in the batch). I think it is best explained in code:

import torch

# Data shape
n_batch = 7
n_in_channels = 5
n_time = 17
data = torch.rand(n_batch, n_in_channels, n_time)

# Convolution shape
n_convs = 11
n_out_channels = 3
kernel_size = 13
conv_weights = torch.rand(n_convs, n_out_channels, n_in_channels, kernel_size)
bias_weights = torch.rand(n_convs, n_out_channels)

Notably, in my case one can assume that the padding, stride and dilation parameters are left at their defaults.

The output which I wish to compute in parallel can be produced with the following loop:

# Out-data shape
out_data = torch.empty(n_batch, n_convs, n_out_channels, n_time)

for i in range(n_convs):
    out_data[:, i, :] = torch.conv1d(data, conv_weights[i], bias_weights[i])

I’ve been trying to use the groups parameter, but as far as I can see, it requires me to copy the input data several times.
I also tried conv_transpose1d (at least for the special case where kernel_size=1), but also without success.
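
For reference, the groups-based workaround I have in mind looks roughly like this (a sketch; data_rep and grouped_out are just my names). It produces the right numbers, but only after repeating the input along the channel dimension, which is exactly the copy I’d like to avoid:

# Repeat the input once per convolution along the channel dimension
data_rep = data.repeat(1, n_convs, 1)  # (n_batch, n_convs * n_in_channels, n_time)

# Flatten the per-conv weights so each group sees its own copy of the input
grouped_out = torch.conv1d(
    data_rep,
    conv_weights.view(-1, n_in_channels, kernel_size),  # (n_convs * n_out_channels, n_in_channels, kernel_size)
    bias_weights.view(-1),
    groups=n_convs,
)  # (n_batch, n_convs * n_out_channels, n_time - kernel_size + 1)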

Is it possible?
Thanks!

I think increasing the number of output channels would just work, unless I’m missing some detail.

Your code is not executable, as out_data has the wrong size in dim 3 (with no padding the output length is n_time - kernel_size + 1 = 5, not n_time). After fixing it, a simple conv seems to work:

import torch

# Data shape
n_batch = 7
n_in_channels = 5
n_time = 17
data = torch.rand(n_batch, n_in_channels, n_time)

# Convolution shape
n_convs = 11
n_out_channels = 3
kernel_size = 13
conv_weights = torch.rand(n_convs, n_out_channels, n_in_channels, kernel_size)
bias_weights = torch.rand(n_convs, n_out_channels)

# Out-data shape (output length = n_time - kernel_size + 1 = 5 with no padding)
out_data = torch.empty(n_batch, n_convs, n_out_channels, n_time - kernel_size + 1)

for i in range(n_convs):
    out_data[:, i, :] = torch.conv1d(data, conv_weights[i], bias_weights[i])

# Stack all kernels along the output-channel dimension; since every conv
# reads the same input channels, no groups are needed
res = torch.conv1d(data, conv_weights.view(-1, n_in_channels, kernel_size), bias_weights.view(-1))
print((res - out_data.view_as(res)).abs().max())
# tensor(0.)
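
If you want the per-conv layout back afterwards, the flat result can simply be viewed back (a minimal sketch using the names from above):

# Recover the (n_batch, n_convs, n_out_channels, length) layout from the flat result
out_per_conv = res.view(n_batch, n_convs, n_out_channels, -1)
print(out_per_conv.shape)
# torch.Size([7, 11, 3, 5])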

Thank you very much. That’s embarrassingly simple.