Is asymmetric padding of style 'same' available in PyTorch?

But it’s not without problems. According to this explanation:
The padding in a Conv2d layer is implemented as implicit padding, i.e. the convolution kernel itself assumes the given input is padded and computes accordingly. No extra memory is taken by the operation because of the padding, since no padded copy of the input is ever created.
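For illustration, a minimal sketch of the implicit case (the channel counts and input size below are arbitrary): Conv2d with a 3x3 kernel and padding=1 produces a 'same'-sized output without materializing a padded input.

```python
import torch
import torch.nn as nn

# Symmetric padding handled implicitly by Conv2d: the kernel treats the
# input as if it were zero-padded, so no padded copy is allocated.
conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, padding=1)

x = torch.randn(1, 3, 32, 32)
print(conv(x).shape)  # torch.Size([1, 8, 32, 32]) -- 'same' spatial size
```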

F.pad does the padding explicitly, i.e. the input is actually padded and a new output tensor is returned. This involves extra memory.
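To get the asymmetric 'same' padding the question asks about, a common workaround is to pad explicitly with F.pad and run the convolution with padding=0. A sketch, assuming an even 4x4 kernel, where a 'same'-sized output needs uneven padding (1 before, 2 after, on each spatial axis):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Asymmetric 'same' padding for an even kernel: F.pad builds a padded
# copy of the input (costing extra memory), then Conv2d runs with padding=0.
# For 4-D input, the pad tuple is (left, right, top, bottom).
conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=4, padding=0)

x = torch.randn(1, 3, 32, 32)
x_padded = F.pad(x, (1, 2, 1, 2))  # pad 1 before, 2 after on each axis
print(conv(x_padded).shape)        # torch.Size([1, 8, 32, 32])
```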
