How to convert Keras' "same" padding for Conv2DTranspose in PyTorch

I have this piece of code from Keras:

tf.layers.conv2d_transpose(fcn8, filters=512,
    kernel_size=4, strides=(2, 2), padding='SAME', name="fcn9")

that I would like to convert to PyTorch. In Keras, for an input image of size (4, 4), it yields an image of size (8, 8). Without the "same" padding in PyTorch, the operation gives a (10, 10) image. Is there any way I can model this operation?
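To see where the (10, 10) figure comes from: for a transposed convolution with no padding, the output size is `(in - 1) * stride + kernel_size`. A minimal sketch (channel counts here are placeholders, since the original `fcn8` shape isn't shown):

```python
import torch
import torch.nn as nn

# With no padding, ConvTranspose2d output size is
# (in - 1) * stride + kernel_size = (4 - 1) * 2 + 4 = 10,
# which is the (10, 10) result described above.
conv = nn.ConvTranspose2d(in_channels=512, out_channels=512,
                          kernel_size=4, stride=2)

x = torch.randn(1, 512, 4, 4)
y = conv(x)
print(y.shape)  # torch.Size([1, 512, 10, 10])
```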

From what I know, PyTorch doesn't support this as a built-in option, while TensorFlow does. Check out this discussion, which mentions how dynamic loading makes it hard.
However, there could be ways to hack it by combining asymmetric padding layers with conv2d layers. I wouldn't bother doing it unless it's really useful; I'd just go with the built-in padding options. More discussion here.
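That said, for this particular configuration (kernel_size=4, stride=2) the "same" behavior can be matched exactly with a symmetric built-in `padding=1`, since the transposed-conv output size is `(in - 1) * stride - 2 * padding + kernel_size`, i.e. `(4 - 1) * 2 - 2 + 4 = 8`. A hedged sketch (the channel count of `fcn8` isn't given in the question, so 512 in/out is an assumption):

```python
import torch
import torch.nn as nn

# (in - 1) * stride - 2 * padding + kernel_size
# = (4 - 1) * 2 - 2 * 1 + 4 = 8, matching Keras' 'SAME' result of (8, 8).
# in_channels=512 is a placeholder for fcn8's actual channel count.
fcn9 = nn.ConvTranspose2d(in_channels=512, out_channels=512,
                          kernel_size=4, stride=2, padding=1)

x = torch.randn(1, 512, 4, 4)
y = fcn9(x)
print(y.shape)  # torch.Size([1, 512, 8, 8])
```

Note this symmetric trick only works when the total padding needed (`kernel_size - stride`) is even; for odd cases, TensorFlow pads asymmetrically and you would need the padding-layer hack mentioned above.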
