What’s the difference between `padding` and `output_padding` in `torch.nn.ConvTranspose2d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros')`?
I am trying to apply it to an input of 256×32×32 to obtain an output of 256×64×64, with filter size 2×2 and stride 2, but I couldn’t understand the role of the different padding arguments here.
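For reference, here is a minimal sketch of the setup I described, assuming a batch size of 1 and the default `padding=0`, `output_padding=0`:

```python
import torch
import torch.nn as nn

# 256 -> 256 channels, 2x2 kernel, stride 2, default padding/output_padding.
up = nn.ConvTranspose2d(in_channels=256, out_channels=256,
                        kernel_size=2, stride=2,
                        padding=0, output_padding=0)

x = torch.randn(1, 256, 32, 32)  # assumed batch size of 1
y = up(x)
print(y.shape)  # torch.Size([1, 256, 64, 64])
```

With these settings the spatial size already doubles (per the docs, H_out = (H_in - 1) * stride - 2 * padding + kernel_size + output_padding), so I’m unsure what `padding` and `output_padding` would change in this case.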