Conv1D kernel size explained

In the doc for Conv1D, kernel size is described as

kernel_size (int or tuple)

Can someone explain how the kernel size being a tuple makes sense? It made sense in Conv2D, as the kernel is 2-dimensional (height and width).


Hi,

One difference I can mention is that you cannot pass a 3D tensor to Conv2d. For instance, for sound signals with shape [batch, channels, timesteps], Conv2d does not work and the only choice is Conv1d. But if you use a 2D kernel size (a tuple) for Conv1d, it will act in the same way Conv2d does. For instance, when you use a tuple for the kernel size in Conv1d, it forces you to use a 4D tensor as the input. Here is an example that produces the same values:

import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.ones((1, 1, 2, 2))               # 4D input: [batch, channels, height, width]
c = nn.Conv1d(1, 1, (1, 1))                # tuple kernel_size gives Conv1d a 4D weight
c.weight = nn.Parameter(torch.tensor([[[[0.5]]]]))
c.bias = nn.Parameter(torch.tensor([0.]))
c(x)
 
###  output
tensor([[[[0.5000, 0.5000],
          [0.5000, 0.5000]]]], grad_fn=<MkldnnConvolutionBackward>)
###

---
cc = nn.Conv2d(1, 1, (1, 1))
cc.weight = nn.Parameter(torch.tensor([[[[0.5]]]]))
cc.bias = nn.Parameter(torch.tensor([0.]))
cc(x)

###
tensor([[[[0.5000, 0.5000],
          [0.5000, 0.5000]]]], grad_fn=<MkldnnConvolutionBackward>)
###
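As an extra check (a minimal sketch under the same setup, so it assumes the same PyTorch behavior as the snippets above), you can copy the parameters from the Conv1d into the Conv2d and compare the outputs programmatically:

# Copy the Conv1d parameters into the Conv2d and compare the outputs directly.
cc.weight = nn.Parameter(c.weight.detach().clone())
cc.bias = nn.Parameter(c.bias.detach().clone())
print(torch.allclose(c(x), cc(x)))   # True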

Actually, I could not find any information on why that is (so far!), but based on this definition of _ConvNd, I think torch treats input tensors differently depending on their number of dimensions. So, if you pass a 3D input it will call Conv1d, if you pass a 4D input it will call Conv2d, and so on.
In the line below, the weight parameter is expanded depending on the input size, and the weight size is determined by the kernel. I think this is the reason.
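
To illustrate the weight-size point, here is a small sketch (it only prints shapes at construction time, so it does not depend on the dispatch behavior above): the kernel_size you pass becomes the trailing dimensions of the weight, which is why a tuple kernel in Conv1d ends up requiring a 4D input, just like Conv2d.

# kernel_size becomes the trailing dimensions of the weight tensor,
# so a tuple kernel gives Conv1d a 4D weight, the same as Conv2d.
print(nn.Conv1d(1, 1, 3).weight.shape)        # torch.Size([1, 1, 3])
print(nn.Conv1d(1, 1, (3, 3)).weight.shape)   # torch.Size([1, 1, 3, 3])
print(nn.Conv2d(1, 1, (3, 3)).weight.shape)   # torch.Size([1, 1, 3, 3])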

Best


I am still a little confused. So Conv1D with a 2D kernel is essentially Conv2D?

I think so. I ran a few experiments (as I could not interpret more from the source code) and it seems I was correct about the _ConvNd idea. Here are some images of my experiments; they might be wrong or not adequate, but based on the source code in the previous post, I think it is true.

Note that because of compute limitations, I used the same config for both Conv1d and Conv2d and only changed the input size to get bigger dimensions, which you can see in the title of each graph. In the case of CUDA, though, I increased the batch size to get more reliable values.
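
For reference, here is a rough sketch of how such a timing comparison could be set up. The layer config and sizes here are made up, and it relies on the same behavior shown earlier, i.e. that Conv1d with a tuple kernel accepts a 4D input:

import time
import torch
import torch.nn as nn

x = torch.randn(8, 3, 128, 128)                  # 4D input: [batch, channels, H, W]
c1 = nn.Conv1d(3, 16, (3, 3), padding=(1, 1))    # Conv1d with a 2D (tuple) kernel
c2 = nn.Conv2d(3, 16, (3, 3), padding=(1, 1))    # ordinary Conv2d with the same config

for label, m in [("Conv1d (tuple kernel)", c1), ("Conv2d", c2)]:
    with torch.no_grad():
        m(x)                                     # warm-up
        start = time.perf_counter()
        for _ in range(100):
            m(x)
        print(label, (time.perf_counter() - start) / 100)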

Results on CPU:

Results on GPU:

What I found is that for small tensors Conv2d is noticeably faster, while for bigger tensors they both perform about the same, though Conv2d is always a little bit faster.

I hope it helps
