Doubts about conv2d

Hello guys:
I have the following doubts:

Conv2d only allows specifying padding as a tuple (height, width).
If I set padding=(1, 1), does it mean top=1, bottom=1, left=1 and right=1?
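A quick shape check (a minimal sketch, not from the thread) shows that padding=(1, 1) does pad symmetrically, 1 pixel on each of the four sides:

```python
import torch

# padding=(1, 1) pads 1 row on top and bottom and 1 column on left and right,
# so a 3x3 kernel with stride 1 preserves the 28x28 spatial size
conv = torch.nn.Conv2d(3, 8, kernel_size=3, stride=1, padding=(1, 1))
x = torch.randn(1, 3, 28, 28)
print(conv(x).shape)  # torch.Size([1, 8, 28, 28])
```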

and the second one:

Is it possible to add specific padding for top, bottom, left and right?

thanks

Yes, you should be able to add a tuple of tuples here.

import torch

x = torch.randn(1, 3, 28, 28)  # Conv2d expects (N, C, H, W) input
layer = torch.nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=((0, 1), (0, 1)))
out = layer(x)

It produces an error: Conv2d rejects a nested tuple, since padding must be an int or a tuple of ints.

I just tested this myself, and this is strange, because the docs say that padding is applied before the actual convolution.

And torch.nn.ConstantPad2d accepts 4 values (left, right, top, bottom) to specify the padding. As a workaround you could use the following:

class CustomConv(torch.nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0, **kwargs):
        super().__init__()
        # the conv itself uses padding=0; the padding is applied separately before it
        self.conv = torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding=0, **kwargs)
        # ConstantPad2d accepts an int or a 4-tuple (left, right, top, bottom)
        self.pad = torch.nn.ConstantPad2d(padding, 0)

    def forward(self, input_tensor):
        return self.conv(self.pad(input_tensor))
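For example (the shapes are my own sanity check, not from the thread), padding only the right and bottom edges by one pixel:

```python
import torch

# ConstantPad2d takes padding as (left, right, top, bottom)
pad = torch.nn.ConstantPad2d((0, 1, 0, 1), 0)   # pad right and bottom only
conv = torch.nn.Conv2d(3, 32, kernel_size=3, stride=1)
x = torch.randn(1, 3, 28, 28)
out = conv(pad(x))   # input padded to 29x29, then a 3x3 conv gives 27x27
print(out.shape)  # torch.Size([1, 32, 27, 27])
```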

I’ll try to find out why this is not supported…

@tom told me the following, which makes sense:

Usually the PyTorch API follows cuDNN. This is the case here, too: as far as I know, cuDNN only offers symmetric zero padding. I’ve not looked at the fallback CUDA implementation, but unless it’s a lot more efficient (relative to “what’s possible”) than the CPU one, there just isn’t much point in not splitting out the padding.
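Splitting the padding out of the convolution, as described, can also be done functionally with torch.nn.functional.pad, which takes a 4-tuple (left, right, top, bottom) for 2D spatial inputs:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 28, 28)
# asymmetric zero padding: 1 column on the right, 1 row at the bottom
x_padded = F.pad(x, (0, 1, 0, 1), mode="constant", value=0)
conv = torch.nn.Conv2d(3, 16, kernel_size=3)   # no built-in padding
out = conv(x_padded)   # 29x29 input, 3x3 kernel -> 27x27 output
print(out.shape)  # torch.Size([1, 16, 27, 27])
```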

Thanks a lot @justusschock, it works. This helped me clear up my doubts.