Is asymmetric padding of style 'SAME' available in PyTorch?

Problem as in the title:

When the kernel size is even, we may need asymmetric padding. In TensorFlow we have a padding setting called ‘SAME’, which pads a different number of pixels onto each side of the tensor so that the output size matches the input size.
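For concreteness, this is roughly how ‘SAME’ computes the padding along one dimension, based on the formula in the TensorFlow documentation (same_padding_1d is just an illustrative helper name):

import math

def same_padding_1d(in_size, kernel, stride):
    # Total padding 'SAME' adds along one dimension; the extra
    # pixel (if any) goes to the end (right/bottom).
    out_size = math.ceil(in_size / stride)
    total = max((out_size - 1) * stride + kernel - in_size, 0)
    before = total // 2
    return before, total - before

print(same_padding_1d(4, 2, 1))  # (0, 1): an even kernel needs asymmetric padding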

Is there any such setting available in PyTorch?

Thanks.

I think there is no such thing as ‘SAME’ or ‘VALID’ as in TF/Keras when defining your convolution layer; instead, you define your own padding with a tuple, as stated in the docs for torch.nn.Conv2d: padding (int or tuple, optional) – Zero-padding added to both sides of the input.

Now I realize that this padding argument only accepts symmetric padding in width and height (a tuple of two ints, where the first int is used for the height dimension and the second for the width dimension). Nonetheless, there are many padding layers (http://pytorch.org/docs/nn.html?highlight=padding#padding-layers) to which you can indeed pass 4-tuples to pad each side differently; see the sketch below.
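For instance, a minimal sketch of that approach, assuming a 2x2 kernel with stride 1 (the pad amounts (0, 1, 0, 1) follow the ‘SAME’ formula above):

import torch
import torch.nn as nn

x = torch.randn(1, 3, 4, 4)

# ZeroPad2d takes (left, right, top, bottom); pad only the right
# and bottom so a 2x2 kernel with stride 1 keeps the 4x4 size.
pad = nn.ZeroPad2d((0, 1, 0, 1))
conv = nn.Conv2d(3, 8, kernel_size=2, padding=0)

print(conv(pad(x)).shape)  # torch.Size([1, 8, 4, 4])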

Thanks for the hint. I will try using padding layers before the convolutions or transposed convolutions, and hope that solves my problem.

Thanks again.

I’m also interested in this topic. I think that by combining asymmetric padding with Conv2d, one can mimic TensorFlow’s ‘SAME’ for tflearn.layers.conv.conv_2d and for tflearn.layers.conv.conv_2d_transpose with stride 1. But I couldn’t find a way to translate tflearn.layers.conv.conv_2d_transpose with asymmetric padding and stride > 1.
After looking at “Padding, strides, transposed (odd)” from https://github.com/vdumoulin/conv_arithmetic, I’m convinced that you can’t reproduce tflearn.layers.conv.conv_2d_transpose with asymmetric padding and stride > 1 using a cascade of asymmetric padding and convolutions in PyTorch.
Does anyone have a solution for this?
Toy example:
Trying to translate tflearn.layers.conv.conv_2d_transpose(x, nb_filter = 1, filter_size = [5, 5], output_shape = [6,8], strides = 2, activation = 'linear') to PyTorch, with x.size = (1, 1, 3, 4) = (BATCH, INPUT_FILTERS, HEIGHT, WIDTH).

Refer to this answer and calculate the padding needed for PyTorch’s ConvTranspose2d. That will let you convert tflearn.layers.conv.conv_2d_transpose with a non-unit stride to the PyTorch version; a sketch for the toy example follows.
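For the toy example above, one way to at least match the shapes is to solve PyTorch’s ConvTranspose2d output-size formula, H_out = (H_in - 1)*stride - 2*padding + kernel + output_padding, for padding and output_padding. A sketch (whether the kernel alignment exactly matches tflearn’s ‘SAME’ cropping still needs to be verified):

import torch
import torch.nn as nn

x = torch.randn(1, 1, 3, 4)  # (BATCH, INPUT_FILTERS, HEIGHT, WIDTH)

# (3 - 1)*2 - 2*2 + 5 + 1 = 6 and (4 - 1)*2 - 2*2 + 5 + 1 = 8
deconv = nn.ConvTranspose2d(1, 1, kernel_size=5, stride=2,
                            padding=2, output_padding=1)
print(deconv(x).shape)  # torch.Size([1, 1, 6, 8])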

Refer to torch.nn.ZeroPad2d: asymmetric padding is available by passing a 4-tuple (left, right, top, bottom) to the padding layer.
For example:

>>> input = torch.randn(1, 1, 3, 3)
>>> input
tensor([[[[-1.7800,  0.6112, -0.0166],
          [-2.1496, -0.5789,  0.8997],
          [-0.5621,  0.9050,  0.4039]]]])
>>> m = torch.nn.ZeroPad2d((1, 2, 1, 2))
>>> m(input)
tensor([[[[ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000],
          [ 0.0000, -1.7800,  0.6112, -0.0166,  0.0000,  0.0000],
          [ 0.0000, -2.1496, -0.5789,  0.8997,  0.0000,  0.0000],
          [ 0.0000, -0.5621,  0.9050,  0.4039,  0.0000,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000]]]])

But it’s not without problems. According to here:
The padding in the Conv2d layer is implemented as implicit padding, i.e. the convolution kernel itself assumes that the given input is padded and does the computation accordingly. No extra memory is taken by the operation because of the padding values.

F.pad does the padding explicitly, i.e. the input is padded and a new output tensor is returned. This involves extra memory.
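For completeness, the explicit form looks like this (a minimal sketch; F.pad takes the pad widths as (left, right, top, bottom) for the last two dimensions):

import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 3, 3)

# Explicit padding: allocates and returns a new, larger tensor.
y = F.pad(x, (1, 2, 1, 2))
print(y.shape)  # torch.Size([1, 1, 6, 6])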

AFAIK, it’s best not to use the functional API (F.pad) for adding layers, especially ones that change the shape of their input. Such layers won’t show up in the model description, which makes it harder to infer how the shape of the feature maps evolves as the network progresses.

torch.nn layers make life easier (refer to @duang’s example above).
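For instance, declaring the padding as a module keeps it visible when you print the model (a minimal sketch with made-up channel sizes):

import torch.nn as nn

model = nn.Sequential(
    nn.ZeroPad2d((0, 1, 0, 1)),  # asymmetric 'SAME'-style padding
    nn.Conv2d(3, 8, kernel_size=2),
)
print(model)  # the ZeroPad2d layer shows up in the description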