Is there anything in PyTorch with the same functionality as padding='same' in TensorFlow?

I’m trying to make the output shape match the input shape, but since kernel_size is (4, 3, 3), it’s hard to get them to line up.

For example,
input_shape = [5, 256, 4, 64, 64]
kernel_size = (4, 3, 3)

Applying this kernel_size to input_shape gives me this:
output_shape = [5, 128, 1, 62, 62]
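For reference, this minimal snippet reproduces those shapes (assuming a Conv3d with 128 output channels, stride 1, and no padding):

```python
import torch

x = torch.rand(5, 256, 4, 64, 64)
# No padding, stride 1: each spatial dim shrinks by (kernel - 1)
conv = torch.nn.Conv3d(256, 128, kernel_size=(4, 3, 3))
print(conv(x).shape)  # torch.Size([5, 128, 1, 62, 62])
```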

Even after trying several paddings on the input, I have a hard time getting an output the same size as input_shape.

However, in TensorFlow or Keras code, padding=‘same’ makes it easy to keep the input and output sizes equal.

How can I implement padding=‘same’ in PyTorch?

Hi woals2274,
The PyTorch conv modules apply only symmetric padding to the input, so they can’t produce ‘same’ padding for even-sized kernels.
You could instead apply asymmetric padding to the input yourself, using the following code:

import torch

input_data = torch.rand(5, 256, 4, 64, 64)
# Pad order for ConstantPad3d: (W_left, W_right, H_top, H_bottom, D_front, D_back).
# The depth kernel of 4 needs 3 total padding, split asymmetrically as 2 + 1.
pad = torch.nn.ConstantPad3d((1, 1, 1, 1, 2, 1), 0)
conv = torch.nn.Conv3d(256, 128, kernel_size=(4, 3, 3), padding=0, stride=1)
print("Input Data size: {}\nOutput Data size: {}".format(input_data.size(), conv(pad(input_data)).size()))