How to convolve along a single axis?

I have two tensors: B and a predefined kernel.

B.shape
>> torch.Size([6, 6, 1])

kernel.shape
>> torch.Size([5])

In SciPy it’s possible to convolve the tensor with the kernel along a single axis using scipy.ndimage.convolve1d:

convolve1d(B.numpy(), kernel.numpy(), axis=0, mode="constant")

mode="constant" pads the borders with a constant value, which is zero by default.
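For reference, a self-contained sketch of that call (the tensor values here are made up for illustration):

```python
import torch
from scipy.ndimage import convolve1d

# Example tensors matching the shapes above
B = torch.randn(6, 6, 1)                     # [6, 6, 1]
kernel = torch.tensor([1., 2., 3., 2., 1.])  # [5]

# Convolve along axis 0 only; "constant" zero-pads the borders
out = convolve1d(B.numpy(), kernel.numpy(), axis=0, mode="constant")
print(out.shape)  # (6, 6, 1) -- same shape as the input
```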

torch.nn.functional.conv1d, however, doesn’t have a parameter to convolve along a single axis. Is it possible to mimic that behaviour of scipy?

Thank you in advance!

nn.Conv1d uses dim 2 as the temporal axis and convolves (or rather cross-correlates) along this dimension.
Therefore, you would need to permute the dimensions of your input.
Here is a small example:

import torch
import torch.nn as nn

conv = nn.Conv1d(6, 1, kernel_size=5, padding=2)
input = torch.randn(6, 6, 1)
input = input.permute(2, 1, 0)
print(input.shape) # [batch_size, channels, seq]
> torch.Size([1, 6, 6])
output = conv(input)

But the weight tensor then actually has the shape torch.Size([1, 6, 5]), while my kernel is just torch.Size([5]). Do you suggest assigning a repeated sequence of my kernel to the weights, such as

conv.weight = torch.nn.Parameter(kernel.repeat(6).view(1, 6, 5))

I also expect an output of the same shape as the input, but in your example the output has shape torch.Size([1, 1, 6]), which differs from the input shape torch.Size([1, 6, 6]). The scipy function above doesn’t change the shape.

The scipy function doesn’t change the shape because it applies padding by default.
torch.nn.Conv1d also accepts an optional padding argument.

The shape of the weight tensor depends on the number of in_channels and out_channels.
In your example, can you clarify what the size [6, 6, 1] of your tensor B corresponds to?

It’s actually [sequence_length, batch_size, value], and I’m trying to convolve the value along the sequence.

So you are working with a single channel.

In this case, try using B.permute(1, 2, 0) to obtain the desired shape:
[batch_size, in_channels, sequence_length] (see docs).

Then your kernel of size [5] needs to be [out_channels, in_channels, kernel_size], so it needs a kernel.view(1, 1, -1).

And to keep the same size, you need a padding of (kernel_size - 1) // 2 elements.
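In code, assuming an odd kernel length so the division is exact:

```python
kernel_size = 5
padding = (kernel_size - 1) // 2  # symmetric zero padding that preserves the sequence length
print(padding)  # 2
```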

So for example, torch.nn.functional.conv1d(B.permute(1, 2, 0), kernel.view(1, 1, -1), padding=2)


Thank you! :) That did (almost) work.
The kernel needed to be mirrored to match the scipy function exactly, since conv1d performs a cross-correlation rather than a true convolution.
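Putting the whole thread together, a sketch that reproduces the scipy result (the tensor values are made up; the flip accounts for conv1d being a cross-correlation):

```python
import numpy as np
import torch
import torch.nn.functional as F
from scipy.ndimage import convolve1d

B = torch.randn(6, 6, 1)                    # [seq_len, batch, value]
kernel = torch.tensor([1., 2., 3., 4., 5.])

# Reference: scipy convolves along axis 0 with zero padding
expected = convolve1d(B.numpy(), kernel.numpy(), axis=0, mode="constant")

# conv1d cross-correlates, so mirror the kernel first
out = F.conv1d(
    B.permute(1, 2, 0),                     # [batch, in_channels, seq_len]
    kernel.flip(0).view(1, 1, -1),          # [out_channels, in_channels, kernel_size]
    padding=2,                              # (kernel_size - 1) // 2 keeps the length
)
out = out.permute(2, 0, 1)                  # back to [seq_len, batch, value]

print(np.allclose(out.numpy(), expected, atol=1e-4))  # True
```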