output_size is used to get the right output_padding, as seen in this line of code. However, this method is not exposed publicly, so you could either define the output_padding manually or use this hacky way to invoke the method:
import torch
import torch.nn as nn
import torch.nn.functional as F

conv = nn.ConvTranspose2d(3, 1, 2, 2, bias=False)
x = torch.randn(1, 3, 10, 10)
# vanilla
output_vanilla = conv(x)
print(output_vanilla.shape)
> torch.Size([1, 1, 20, 20])
# output_size
output_size = conv(x, output_size=(21, 21))
print(output_size.shape)
> torch.Size([1, 1, 21, 21])
# functional API
weight = conv.weight.detach().clone()
output_func = F.conv_transpose2d(x, weight, stride=2)
print(output_func.shape)
> torch.Size([1, 1, 20, 20])
print((output_func-output_vanilla).abs().max())
> tensor(0., grad_fn=<MaxBackward1>)
# hacky way to get the output_padding via the private method
# (self is only used when output_size is None, so passing None works here;
# note that the signature of this private method can change between PyTorch versions)
output_padding = nn.ConvTranspose2d._output_padding(
    self=None,
    input=x,
    output_size=(21, 21),
    stride=(2, 2),
    padding=(0, 0),
    kernel_size=(2, 2)
)
output_func_size = F.conv_transpose2d(
    x, weight, stride=2, output_padding=output_padding)
print(output_func_size.shape)
> torch.Size([1, 1, 21, 21])
print((output_func_size-output_size).abs().max())
> tensor(0., grad_fn=<MaxBackward1>)
I’m not sure why _output_padding is wrapped in a method instead of being exposed publicly, so I would rather recommend calculating the output_padding argument manually instead of relying on this hacky approach.
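For reference, here is a minimal sketch of how that manual calculation could look, based on the shape formula from the ConvTranspose2d docs (H_out = (H_in - 1) * stride - 2 * padding + dilation * (kernel_size - 1) + output_padding + 1); compute_output_padding is just a hypothetical helper for this sketch, not part of PyTorch:

# minimal sketch: derive output_padding from the docs' shape formula
# (compute_output_padding is a hypothetical helper, not part of PyTorch)
def compute_output_padding(in_size, target_size, stride, padding=0, kernel_size=2, dilation=1):
    # smallest output size reachable without output_padding
    min_size = (in_size - 1) * stride - 2 * padding + dilation * (kernel_size - 1) + 1
    output_padding = target_size - min_size
    # output_padding must be smaller than either stride or dilation
    assert 0 <= output_padding < max(stride, dilation)
    return output_padding

output_padding = compute_output_padding(in_size=10, target_size=21, stride=2)
output_manual = F.conv_transpose2d(x, weight, stride=2, output_padding=output_padding)
print(output_manual.shape)
> torch.Size([1, 1, 21, 21])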