Padding=1 and "same" give different results

Based on what I know, the padding argument of Conv2d can take the values 0 and 1: 0 corresponds to "valid", which means no padding, while 1 corresponds to "same", which means zeros are added as padding so that the output size matches the input size. However, when I tried "valid" and "same" in a 2D convolutional layer with an input of shape (3, 64, 64), their output sizes were the same. In theory, I would expect the output with no padding to be smaller than 64x64. Also, when I set the parameter to 1, the output size becomes 66x66. This looks very strange to me. Does anyone have any idea?

import numpy as np
import torch
import torch.nn as nn

# sample stands in for the (3, 64, 64) input described above
sample = np.random.rand(3, 64, 64)
sample_torch = sample.reshape(1, 3, 64, 64)

# padding=0 ("valid"): no padding is added
conv1 = nn.Conv2d(3, 64, 1, stride=1, padding=0).float()
y = conv1(torch.tensor(sample_torch).float())
print(y.shape)
# padding=1: one row/column of zeros on every side
conv2 = nn.Conv2d(3, 64, 1, stride=1, padding=1).float()
y = conv2(torch.tensor(sample_torch).float())
print(y.shape)
# padding="valid": equivalent to padding=0
conv1 = nn.Conv2d(3, 64, 1, stride=1, padding="valid").float()
y = conv1(torch.tensor(sample_torch).float())
print(y.shape)
# padding="same": pads so the output keeps the input's spatial size
conv2 = nn.Conv2d(3, 64, 1, stride=1, padding="same").float()
y = conv2(torch.tensor(sample_torch).float())
print(y.shape)

This is the output:

torch.Size([1, 64, 64, 64])
torch.Size([1, 64, 66, 66])
torch.Size([1, 64, 64, 64])
torch.Size([1, 64, 64, 64])

That's not generally true, since the padding argument accepts different values, and whether "same" or "valid" changes the output size also depends on the kernel size, the stride, and other arguments.
In your use case you are using a kernel size of 1, which already yields the same output size without any padding, so "same" and "valid" both return 64x64. With padding=1, the input is first padded to 66x66, and since a 1x1 kernel does not reduce the spatial size, the output stays 66x66.
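To illustrate with a kernel size where padding matters, here is a minimal sketch (the layer names conv_valid, conv_pad1, and conv_same are just for this example). The output size follows output = floor((input + 2*padding - kernel_size) / stride) + 1, so with a 3x3 kernel, "valid" shrinks 64x64 to 62x62, while padding=1 and "same" both preserve 64x64:

import torch
import torch.nn as nn

x = torch.randn(1, 3, 64, 64)  # example input in NCHW layout

# With a 3x3 kernel, "valid" (padding=0) shrinks each spatial dim by 2
conv_valid = nn.Conv2d(3, 64, 3, stride=1, padding="valid")
print(conv_valid(x).shape)   # torch.Size([1, 64, 62, 62])

# padding=1 restores the lost border, matching "same" for this kernel size
conv_pad1 = nn.Conv2d(3, 64, 3, stride=1, padding=1)
print(conv_pad1(x).shape)    # torch.Size([1, 64, 64, 64])

# "same" picks the padding needed to keep the spatial size (stride must be 1)
conv_same = nn.Conv2d(3, 64, 3, stride=1, padding="same")
print(conv_same(x).shape)    # torch.Size([1, 64, 64, 64])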
