Same padding equivalent in PyTorch

I have a layer with an input of

torch.Size([64, 32, 100, 20])

In Keras I was using this

conv_first1 = Conv2D(32, (4, 1), padding="same")(conv_first1)

which led to an output shape the same as the input shape.

If I use the below in PyTorch, I end up with a shape of 64,32,99,20

self.conv2 = nn.Conv2d(32, 32, (4, 1), padding=(1,0))

and if I instead use padding (2,0), it becomes 64,32,101,20.

What should be used in order to end up with
input_shape == output_shape
64,32,100,20 = 64,32,100,20

2 Likes

Hi,

PyTorch does not support "same" padding the way Keras does, but you can still achieve it easily by padding the tensor explicitly before passing it to the convolution layer. Here the total padding along the height is kernel_size - 1 = 3, which cannot be split symmetrically, so one side of the tensor (top or bottom) has to receive one extra row to achieve "same" padding.

import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(64, 32, 100, 20)
x = F.pad(x, (0, 0, 2, 1))  # pad order: (left, right, top, bottom)
print(nn.Conv2d(32, 32, (4, 1))(x).shape)  # torch.Size([64, 32, 100, 20])

Please see this post about the calculations: Converting tensorflow model to pytorch: issue with padding

Best

4 Likes

Is there some way to know whether the equivalent to Keras is
x = F.pad(x, (0, 0, 2, 1))  # (left, right, top, bottom) or
x = F.pad(x, (0, 0, 1, 2))  # (left, right, top, bottom)

1 Like

Based on this issue (unfortunately, links to the docs are broken due to the TF2 update), it seems TF uses the second approach:

x = F.pad(x, (0, 0, 1, 2))  # (left, right, top, bottom)

Code for TF "same" padding:

import numpy as np

in_height, in_width = 100, 20
filter_height, filter_width = 4, 1
strides = (None, 1, 1)  # TF convention (batch, height, width); the batch stride is unused here
out_height = np.ceil(float(in_height) / float(strides[1]))
out_width = np.ceil(float(in_width) / float(strides[2]))

# The total padding applied along the height and width is computed as:

if (in_height % strides[1] == 0):
  pad_along_height = max(filter_height - strides[1], 0)
else:
  pad_along_height = max(filter_height - (in_height % strides[1]), 0)
if (in_width % strides[2] == 0):
  pad_along_width = max(filter_width - strides[2], 0)
else:
  pad_along_width = max(filter_width - (in_width % strides[2]), 0)

print(pad_along_height, pad_along_width)  # 3 0

# Finally, the padding on the top, bottom, left, and right is:

pad_top = pad_along_height // 2
pad_bottom = pad_along_height - pad_top
pad_left = pad_along_width // 2
pad_right = pad_along_width - pad_left

print(pad_left, pad_right, pad_top, pad_bottom)  # 0 0 1 2
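
Plugging those values back into F.pad (whose order is left, right, top, bottom) reproduces the Keras output shape; a quick check with the shapes from the question:

import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(64, 32, 100, 20)
x = F.pad(x, (0, 0, 1, 2))  # pad_left, pad_right, pad_top, pad_bottom from above
print(nn.Conv2d(32, 32, (4, 1))(x).shape)  # torch.Size([64, 32, 100, 20])
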
4 Likes

@Nikronic saved my day. Thanks!

Is it compulsory to do the padding before the conv layer, or can we also feed the appropriate padding value to the conv layer?
Thanks

It worked for me…!

import functools
import operator
import torch.nn as nn

class Conv2dSamePadding(nn.Conv2d):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Split kernel_size - 1 per dim into (smaller, larger) halves, reversed
        # because ZeroPad2d expects (left, right, top, bottom):
        self.zero_pad_2d = nn.ZeroPad2d(functools.reduce(operator.__add__,
                  [(k // 2 + (k - 2 * (k // 2)) - 1, k // 2) for k in self.kernel_size[::-1]]))

    def forward(self, input):
        return self._conv_forward(self.zero_pad_2d(input), self.weight, self.bias)
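
A quick shape check with the sizes from the original question (stride and dilation left at their defaults, which is where this formula applies):

import torch

conv = Conv2dSamePadding(32, 32, (4, 1))
print(conv(torch.randn(64, 32, 100, 20)).shape)  # torch.Size([64, 32, 100, 20])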

Gist: PyTorch Conv2d equivalent of Tensorflow tf.nn.conv2d(....,padding='SAME') · GitHub

1 Like

This was solved in PyTorch 1.10.0:
“same” is accepted as a value for the padding argument of Conv2d.
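
For example, with the shapes from the original question (note that padding="same" requires a stride of 1):

import torch
import torch.nn as nn

conv = nn.Conv2d(32, 32, (4, 1), padding="same")
print(conv(torch.randn(64, 32, 100, 20)).shape)  # torch.Size([64, 32, 100, 20])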

6 Likes

@yuri One detail I’ve noticed is that padding='same' does not export from PyTorch 1.10.0 to CoreML 5.1.0. For others interested, I came up with the following solution:

import collections
from itertools import repeat
import torch
from torch import nn
import torch.nn.functional as F


def _ntuple(n):
    """Copy from PyTorch since internal function is not importable

    See ``nn/modules/utils.py:6``
    """
    def parse(x):
        if isinstance(x, collections.abc.Iterable):
            return tuple(x)
        return tuple(repeat(x, n))

    return parse


_pair = _ntuple(2)


class Conv2dSame(nn.Module):
    """Manual convolution with same padding

    Although PyTorch >= 1.10.0 supports ``padding='same'`` as a keyword
    argument, this does not export to CoreML as of coremltools 5.1.0, 
    so we need to implement the internal torch logic manually. 

    Currently the ``RuntimeError`` is
    
    "PyTorch convert function for op '_convolution_mode' not implemented"
    """

    def __init__(
            self,
            in_channels, 
            out_channels, 
            kernel_size,
            stride=1,
            dilation=1,
            **kwargs):
        """Wrap base convolution layer

        See official PyTorch documentation for parameter details
        https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html
        """
        super().__init__()
        self.conv = nn.Conv2d(
            in_channels=in_channels,
            out_channels=out_channels,
            kernel_size=kernel_size,
            stride=stride,
            dilation=dilation,
            **kwargs)

        # Setup internal representations
        kernel_size_ = _pair(kernel_size)
        dilation_ = _pair(dilation)
        self._reversed_padding_repeated_twice = [0, 0]*len(kernel_size_)

        # Follow the logic from ``nn/modules/conv.py:_ConvNd``
        for d, k, i in zip(dilation_, kernel_size_, 
                                range(len(kernel_size_) - 1, -1, -1)):
            total_padding = d * (k - 1)
            left_pad = total_padding // 2
            self._reversed_padding_repeated_twice[2 * i] = left_pad
            self._reversed_padding_repeated_twice[2 * i + 1] = (
                    total_padding - left_pad)

    def forward(self, imgs):
        """Setup padding so same spatial dimensions are returned

        All shapes (input/output) follow the ``(N, C, H, W)`` convention

        :param torch.Tensor imgs:
        :return torch.Tensor:
        """
        padded = F.pad(imgs, self._reversed_padding_repeated_twice)
        return self.conv(padded)
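
As a sanity check, the module should match the built-in behavior numerically (assuming PyTorch >= 1.10 so padding="same" is available for comparison; both paths use stride 1):

import torch
from torch import nn

x = torch.randn(64, 32, 100, 20)
same = Conv2dSame(32, 32, (4, 1))
builtin = nn.Conv2d(32, 32, (4, 1), padding="same")
builtin.load_state_dict(same.conv.state_dict())  # copy weights so outputs are comparable
print(torch.allclose(same(x), builtin(x)))  # True
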
2 Likes

I tried different methods for creating ‘same’ padding from scratch, with the same architecture, the same data set, and the same pre-processing; this method behaves just like ‘same’ padding.

Also, padding='same' in a CNN causes issues when converting from the .pth to the .onnx format; using this method solves that issue.
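
For illustration, a minimal export sketch (the file name and the dummy input shape are placeholders) using the explicit-padding module from the post above:

import torch

model = Conv2dSame(32, 32, (4, 1))
dummy = torch.randn(1, 32, 100, 20)
torch.onnx.export(model, dummy, "conv_same.onnx")  # exports; padding='same' may fail here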

for reference -