When a kernel size would result in an "odd" total amount of same padding, on which side is the extra padding conventionally applied?
The simplest case is kernel_size = 2. Consider a 1D input (X Y Z) of length 3. If we apply same padding, we would have to add a pad on either the left or the right side of the input:
P X Y Z > (PX, XY, YZ)
X Y Z P > (XY, YZ, ZP)
The right-hand side shows the sets of data points from the input XYZ that the size-2 kernel sees at each step (with stride=1); "P" is the padding entry.
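For concreteness, the two candidate patch sets can be sketched with a plain Python sliding window (a small illustration, not PyTorch itself):

```python
def windows(seq, k=2):
    # size-k patches seen by a stride-1 kernel sliding over seq
    return [tuple(seq[i:i + k]) for i in range(len(seq) - k + 1)]

print(windows(["P", "X", "Y", "Z"]))  # left pad:  PX, XY, YZ
print(windows(["X", "Y", "Z", "P"]))  # right pad: XY, YZ, ZP
```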
So, would P be on the left or the right side by convention?
Let's run your experiment with a pre-defined kernel of
[[[0., 1.]]], which picks out the "right" pixel value of the current patch:
import torch
import torch.nn as nn

conv = nn.Conv1d(in_channels=1, out_channels=1, kernel_size=2, stride=1, padding='same', bias=False)
with torch.no_grad():
    conv.weight.copy_(torch.tensor([[[0., 1.]]]))
x = torch.randn(1, 1, 3)
# tensor([[[ 0.7866, 1.1084, -1.3647]]])
out = conv(x)
# tensor([[[ 1.1084, -1.3647, 0.0000]]], grad_fn=<ConvolutionBackward0>)
Based on this, the input is padded on the right-hand side.
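The same conclusion can be cross-checked by padding explicitly with `torch.nn.functional.pad` and comparing against the `padding='same'` output (a sketch; the kernel value [[[0., 1.]]] is copied from the experiment above):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

conv = nn.Conv1d(1, 1, kernel_size=2, padding='same', bias=False)
with torch.no_grad():
    conv.weight.copy_(torch.tensor([[[0., 1.]]]))

x = torch.randn(1, 1, 3)
same_out = conv(x)

# an unpadded ("valid") conv with the same weights
conv_valid = nn.Conv1d(1, 1, kernel_size=2, padding=0, bias=False)
with torch.no_grad():
    conv_valid.weight.copy_(conv.weight)

right = conv_valid(F.pad(x, (0, 1)))  # one zero appended on the right
left = conv_valid(F.pad(x, (1, 0)))   # one zero prepended on the left

print(torch.allclose(same_out, right))  # matches: 'same' pads on the right here
```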
Yes, this makes sense. I haven't yet got enough experience with PyTorch to come up with these handy small examples (I only started three weeks or so ago), but I will note this down so I can answer such questions myself in the future. Thanks for taking the time!