Hi all,
Sorry for the potentially noob post, but today I ran into something that is not strictly a problem, but rather unexpected behavior.
When using torch.nn.functional.conv1d(), the documentation specifies that the input tensor should have shape [minibatch, in_channels, iW]. But it also seems to work with a 4d input tensor, provided your weights are shaped accordingly:
import torch
import torch.nn.functional as F

test = torch.ones(1, 3, 2, 2)
weights = torch.randn(4, 3, 1, 1)
F.conv1d(test, weights)
outputs
tensor([[[[-0.8317, -0.8317],
          [-0.8317, -0.8317]],

         [[-0.7135, -0.7135],
          [-0.7135, -0.7135]],

         [[ 0.3278,  0.3278],
          [ 0.3278,  0.3278]],

         [[-1.8960, -1.8960],
          [-1.8960, -1.8960]]]])
That is to say, exactly the same as
F.conv2d(test, weights)
tensor([[[[-0.8317, -0.8317],
          [-0.8317, -0.8317]],

         [[-0.7135, -0.7135],
          [-0.7135, -0.7135]],

         [[ 0.3278,  0.3278],
          [ 0.3278,  0.3278]],

         [[-1.8960, -1.8960],
          [-1.8960, -1.8960]]]])
In other words, it seems like PyTorch is dispatching to conv2d under the hood based on the shapes of the input tensor and the weights. Is that true? And is there any reason not to simply always use conv1d by default?
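For reference, here is a small self-contained sketch of the comparison I am describing. It guards the conv1d call with a try/except, since (this is an assumption on my part) different PyTorch versions may or may not validate the input rank before dispatching:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
test = torch.ones(1, 3, 2, 2)      # 4d input, as in the snippet above
weights = torch.randn(4, 3, 1, 1)  # 4d weights, i.e. a conv2d-shaped kernel

out2 = F.conv2d(test, weights)     # the "intended" 2d call
print(out2.shape)                  # torch.Size([1, 4, 2, 2])

# Does conv1d silently accept the 4d tensors and agree with conv2d?
try:
    out1 = F.conv1d(test, weights)
    same = bool(torch.allclose(out1, out2))
except RuntimeError:
    same = None  # this version rejects a 4d input to conv1d outright
print(same)
```

If `same` prints True, conv1d really is producing the conv2d result for these shapes; if the call raises instead, that version of PyTorch is checking the input rank rather than dispatching on the weight shape.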
Thanks for your time.