Padding strategy for nn.ConvTranspose2d

In TensorFlow there are padding strategies like 'SAME' and 'VALID'.
In PyTorch, however, we have to specify the padding explicitly for each dimension. Does anyone know how to implement these different padding strategies for nn.ConvTranspose2d?

The output shape of the ConvTranspose2d layer is explained in the docs.
You would have to calculate the appropriate padding for your use case.
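For a single spatial dimension, the formula from the docs can be wrapped in a small helper (a minimal sketch; the function name is just for illustration):

```python
def conv_transpose2d_out_size(in_size, kernel_size, stride=1, padding=0,
                              output_padding=0, dilation=1):
    # Output size per spatial dimension, as given in the nn.ConvTranspose2d docs
    return ((in_size - 1) * stride - 2 * padding
            + dilation * (kernel_size - 1) + output_padding + 1)

# e.g. the common kernel_size=4, stride=2, padding=1 setup doubles the spatial size:
conv_transpose2d_out_size(32, kernel_size=4, stride=2, padding=1)  # -> 64
```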


Hi @ptrblck,

here's what I want to do:
I have a tensor of shape (128, 3, 32, 32) and I want to apply a deconvolution to it so that the output shape is (128, 64, 32, 32),
with kernel_size=4 and stride=1.
If I calculate the padding with the formula from the docs,
I get 2*padding = 3.
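Written out with the docs formula (assuming the defaults dilation=1 and output_padding=0):

```python
# Output size formula from the nn.ConvTranspose2d docs
# (dilation=1 and output_padding=0 are the defaults):
#   out = (in - 1) * stride - 2 * padding + dilation * (kernel_size - 1) + output_padding + 1
# With in = 32, kernel_size = 4, stride = 1:
#   out = 31 - 2 * padding + 3 + 1 = 35 - 2 * padding
# Requiring out == 32 gives 2 * padding = 3, i.e. padding = 1.5, which is not an integer.
```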

So how should I define the padding value?

Are you sure you need a ConvTranspose layer for this?
It seems that your width and height should not change in the output.
You could just use a Conv2d layer with e.g. kernel_size=3 and padding=1.

EDIT: Using kernel_size=4 is a bit tricky. Since the filter size is even, you will end up with an output size of 31 or 33.
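A quick shape check illustrating both points (random input, shapes only):

```python
import torch
import torch.nn as nn

x = torch.randn(128, 3, 32, 32)

# Plain Conv2d keeps the spatial size and only changes the channel count
conv = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1)
print(conv(x).shape)  # torch.Size([128, 64, 32, 32])

# With an even kernel and stride=1, ConvTranspose2d misses 32 by one pixel either way
deconv1 = nn.ConvTranspose2d(3, 64, kernel_size=4, stride=1, padding=1)
deconv2 = nn.ConvTranspose2d(3, 64, kernel_size=4, stride=1, padding=2)
print(deconv1(x).shape)  # torch.Size([128, 64, 33, 33])
print(deconv2(x).shape)  # torch.Size([128, 64, 31, 31])
```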
