Conv1D kernel size explained

In the doc for Conv1D, kernel size is described as

kernel_size (int or tuple)

Can someone explain how the kernel size being a tuple makes sense? It made sense in Conv2D, as the kernel is 2-dimensional (height and width).


Hi,

One difference I can mention is that you cannot pass a 3D tensor to Conv2d. For instance, for sound signals of shape [batch, channels, timesteps], conv2d does not work and the only choice is conv1d (the usual 3D call is sketched further below). But if you use a 2D kernel size (a tuple) for conv1d, it will act the same way conv2d does: using a tuple for the kernel size in conv1d forces you to use a 4D tensor as the input. Here is an example that produces the same values:

import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.ones((1, 1, 2, 2))
c = nn.Conv1d(1, 1, (1, 1))   # tuple kernel size on Conv1d
c.weight = nn.Parameter(torch.tensor([[[[0.5]]]]))
c.bias = nn.Parameter(torch.tensor([0.]))
c(x)
 
###  output
tensor([[[[0.5000, 0.5000],
          [0.5000, 0.5000]]]], grad_fn=<MkldnnConvolutionBackward>)
###

---
cc = nn.Conv2d(1, 1, (1, 1))
cc.weight = nn.Parameter(torch.tensor([[[[0.5]]]]))
cc.bias = nn.Parameter(torch.tensor([0.]))
cc(x)

###
tensor([[[[0.5000, 0.5000],
          [0.5000, 0.5000]]]], grad_fn=<MkldnnConvolutionBackward>)
###
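For contrast with the sound-signal case mentioned above: the usual Conv1d call takes a 3D input and an int kernel size that slides along the time dimension only (a minimal sketch, with illustrative shapes):

import torch
import torch.nn as nn

x = torch.ones(1, 1, 8)             # [batch, channels, timesteps]
c = nn.Conv1d(1, 1, kernel_size=3)  # int kernel size, the documented 1D case
print(c.weight.shape)               # torch.Size([1, 1, 3]) -- 3D weight
print(c(x).shape)                   # torch.Size([1, 1, 6]) -- length shrinks by kernel_size - 1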

Actually, I could not find any information on why that is (so far!), but I think, based on this definition of _ConvNd, torch treats input tensors differently according to their number of dimensions. So if you pass a 3D input it behaves like Conv1d, if you pass a 4D input it behaves like Conv2d, and so on.
In the line below, it expands the weight parameter depending on the input size, and the weight size is determined by the kernel. I think this is the reason.
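A quick way to check the weight-shape point (on a version such as 1.7, where Conv1d accepts the tuple kernel): the tuple kernel gives Conv1d a 4D weight, identical in shape to Conv2d's.

import torch.nn as nn

c1 = nn.Conv1d(1, 1, (1, 1))  # tuple kernel -> 4D weight
c2 = nn.Conv2d(1, 1, (1, 1))
print(c1.weight.shape)        # torch.Size([1, 1, 1, 1])
print(c2.weight.shape)        # torch.Size([1, 1, 1, 1])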

Best


I am still a little confused. So Conv1D with a 2D kernel is essentially Conv2D?

I think so. I ran a few experiments (as I could not interpret more from the source code) and it seems I was correct about the _ConvNd idea. Here are some images of my experiments; they might be wrong or inadequate, but based on the source code in the previous post, I think it is true.

Note that because of compute limitations, I used the same config for both Conv1d and Conv2d and only changed the input size to get bigger dimensions, which you can see in the title of each graph. In the case of CUDA, though, I increased the batch size to get more reliable values.

Results on CPU: [plots not preserved]

Results on GPU: [plots not preserved]

What I found is that on small tensors Conv2d works faster, while on bigger tensors they both perform about the same, though Conv2d is always a little bit faster.
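Since the plots did not survive here, a minimal timing sketch along the same lines (assuming an older PyTorch such as 1.7, where Conv1d accepts the tuple kernel on a 4D input; the tensor size is illustrative):

import torch
import torch.nn as nn
import torch.utils.benchmark as benchmark

x = torch.randn(8, 1, 256, 256)
c1 = nn.Conv1d(1, 1, (1, 1))  # tuple kernel, as in the snippets above
c2 = nn.Conv2d(1, 1, (1, 1))

t1 = benchmark.Timer(stmt='c1(x)', globals={'c1': c1, 'x': x})
t2 = benchmark.Timer(stmt='c2(x)', globals={'c2': c2, 'x': x})
print(t1.timeit(100))  # Conv1d with tuple kernel
print(t2.timeit(100))  # Conv2d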

I hope it helps


Your code doesn’t run:

torch.manual_seed(0)
x = torch.ones((1,1, 2, 2))
print(x)
c = nn.Conv1d(1, 1, (1, 1))
c.weight = nn.Parameter(torch.tensor([[[[0.5]]]]))
c.bias = nn.Parameter(torch.tensor([0.]))
c(x)
###
tensor([[[[1., 1.],
          [1., 1.]]]])
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_5860\1401148504.py in <module>
      5 c.weight = nn.Parameter(torch.tensor([[[[0.5]]]]))
      6 c.bias = nn.Parameter(torch.tensor([0.]))
----> 7 c(x)

d:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
   1192         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1193                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194             return forward_call(*input, **kwargs)
   1195         # Do not call functions when jit is used
   1196         full_backward_hooks, non_full_backward_hooks = [], []

d:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\conv.py in forward(self, input)
    311 
    312     def forward(self, input: Tensor) -> Tensor:
--> 313         return self._conv_forward(input, self.weight, self.bias)
    314 
    315 

d:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\conv.py in _conv_forward(self, input, weight, bias)
    307                             weight, bias, self.stride,
    308                             _single(0), self.dilation, self.groups)
--> 309         return F.conv1d(input, weight, bias, self.stride,
    310                         self.padding, self.dilation, self.groups)
    311 

RuntimeError: Expected 2D (unbatched) or 3D (batched) input to conv1d, but got input of size: [1, 1, 2, 2]

PyTorch version 1.7.1 is OK; version 2.0 raises this error.
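On 2.x the input-dimension check is strict, so to reproduce the same computation you either call Conv2d on the 4D input directly or flatten the spatial dimensions and use Conv1d with a plain int kernel; a minimal sketch:

import torch
import torch.nn as nn

x = torch.ones((1, 1, 2, 2))

cc = nn.Conv2d(1, 1, (1, 1))               # 4D input: use Conv2d
cc.weight = nn.Parameter(torch.tensor([[[[0.5]]]]))
cc.bias = nn.Parameter(torch.tensor([0.]))
print(cc(x))                               # 0.5 everywhere, shape [1, 1, 2, 2]

c = nn.Conv1d(1, 1, 1)                     # int kernel: needs a 3D input
c.weight = nn.Parameter(torch.tensor([[[0.5]]]))
c.bias = nn.Parameter(torch.tensor([0.]))
print(c(x.flatten(2)))                     # same values on the flattened [1, 1, 4] layout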