Hi, I need to add padding to my input before a convolution layer. Earlier I was adding the padding manually, but now I'm trying to use the padding argument of nn.Conv2d and I'm getting some unexpected output shapes.

Suppose my input is

import torch
import torch.nn as nn

sample = torch.randn(1, 1, 11, 128)

Now I need to do a convolution with a window of size 2 along axis 2, so I did the following:

conv = nn.Conv2d(1, 3, (2, 128))

conv(sample).size()

torch.Size([1, 3, 10, 1])

The output here makes sense: there are 3 output channels, and (11 - 2 + 1) = 10 is the size of axis 2.
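The arithmetic above can be written out explicitly; a minimal sketch of the stride-1, no-padding output-size rule (out = in - kernel + 1), with a hypothetical helper name:

```python
def conv_out_size(in_size, kernel_size):
    # Stride-1, no-padding convolution output size along one axis
    return in_size - kernel_size + 1

# Axis 2: input 11, window 2  -> 10 rows
# Axis 3: input 128, window 128 -> 1 column
print(conv_out_size(11, 2))     # 10
print(conv_out_size(128, 128))  # 1
```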

Now I'm trying to add padding of size (1, 128) to the input before the convolution, so I did the following:

conv2 = nn.Conv2d(1, 3, (2, 128), padding=(1,128))

My understanding is that before the convolution, padding of size (1, 128) will be added on both sides, so the new input size will be [1, 1, 11+1+1, 128]. The convolution of window size 2 along axis 2 will then give 13 - 2 + 1 = 12 values, and axis 3 of the output will have size 1. But instead I'm getting a different value:

conv2(sample).size()

torch.Size([1, 3, 12, 257])

Why is the size of axis 3 equal to 257 instead of 1? It would be great if someone could help me understand this.
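For context, I noticed that the observed shape does match the general output-size formula from the Conv2d docs, out = in + 2 * padding - kernel + 1, if the padding value for each spatial axis is added on both sides of that axis (sketch below, helper name is mine):

```python
def conv_out_size(in_size, kernel_size, padding=0):
    # Conv2d output size along one axis, stride 1, dilation 1;
    # `padding` is added on BOTH sides of the axis
    return in_size + 2 * padding - kernel_size + 1

print(conv_out_size(11, 2, padding=1))       # (11 + 2 - 2 + 1) = 12
print(conv_out_size(128, 128, padding=128))  # (128 + 256 - 128 + 1) = 257
```

So it looks as though padding=(1, 128) pads axis 3 by 128 on each side, not by 128 in total; is that the intended behavior?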

Thank you