Shape requirements of conv1d seem very permissive

Hi all,

Sorry for the potentially noob post, but today I ran into something that is not strictly a problem, but rather unexpected behavior.

The documentation for torch.nn.functional.conv1d() specifies that the input tensor should have shape [minibatch, in_channels, iW]. But it also seems to work with a 4d input tensor, provided the weights are defined accordingly:

import torch
import torch.nn.functional as F

test = torch.ones(1, 3, 2, 2)      # 4d input: [minibatch, in_channels, H, W]
weights = torch.randn(4, 3, 1, 1)  # 4d weight: [out_channels, in_channels, kH, kW]
F.conv1d(test, weights)

outputs

tensor([[[[-0.8317, -0.8317],
          [-0.8317, -0.8317]],

         [[-0.7135, -0.7135],
          [-0.7135, -0.7135]],

         [[ 0.3278,  0.3278],
          [ 0.3278,  0.3278]],

         [[-1.8960, -1.8960],
          [-1.8960, -1.8960]]]])

That is to say, exactly the same as

F.conv2d(test, weights)
tensor([[[[-0.8317, -0.8317],
          [-0.8317, -0.8317]],

         [[-0.7135, -0.7135],
          [-0.7135, -0.7135]],

         [[ 0.3278,  0.3278],
          [ 0.3278,  0.3278]],

         [[-1.8960, -1.8960],
          [-1.8960, -1.8960]]]])

In other words, it seems like PyTorch switches to conv2d under the hood based on the shapes of the input tensor and weights. Is that true? Is there any reason not to simply use conv1d by default everywhere?
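For reference, here is a quick sketch comparing the two calls directly (the manual seed is only there to make the comparison reproducible):

import torch
import torch.nn.functional as F

torch.manual_seed(0)
test = torch.ones(1, 3, 2, 2)
weights = torch.randn(4, 3, 1, 1)

out1 = F.conv1d(test, weights)  # undocumented 4d input
out2 = F.conv2d(test, weights)  # documented 4d input
print(torch.equal(out1, out2))  # prints True here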

Thanks for your time.

I guess some checks might not be triggered for an unexpected shape as long as it's not invalid, and based on this code it seems that, besides a different output_padding argument, the internal call would be equal.
I would still recommend using the corresponding function, as the internals could change without notice, so I would not depend on the equal results from your code snippet.
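For truly 1d data I would stick to the documented 3d shapes; here is a minimal sketch of the supported call (the sizes are arbitrary):

import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8)  # documented conv1d input: [minibatch, in_channels, iW]
w = torch.randn(4, 3, 5)  # conv1d weight: [out_channels, in_channels, kW]
out = F.conv1d(x, w)
print(out.shape)          # torch.Size([1, 4, 4])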
