Achieving desired size with 2D Transpose Convolution

Hi. I have a feature map of size [N, 64, 248, 216] and would like to upsample it using 2D transposed convolutions to [N, 64, 496, 432] (i.e. double dims 2 and 3).
I have the following code extract:

import torch
import torch.nn as nn

feats = torch.randn([2, 64, 248, 216]).cuda()
decoder_block = nn.Sequential(
    nn.ConvTranspose2d(feats.size(1), 64, 3, stride=2, padding=1, bias=False),
    nn.BatchNorm2d(64, eps=1e-3, momentum=0.01),
    nn.ReLU(inplace=True),
).cuda()
decoded_feats = decoder_block(feats).cuda()

The output I get is [2, 64, 495, 431] instead of [2, 64, 496, 432]. How can I fix this? Changing the stride to 1, as suggested in this issue, does not solve the problem. I really appreciate any help you can provide.
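
For context, nn.ConvTranspose2d computes each spatial output dimension as (in - 1) * stride - 2 * padding + dilation * (kernel_size - 1) + output_padding + 1 (per the docs). A small helper (convtranspose2d_out is just a name I'm using for this post) confirms where the 495 and 431 come from:

def convtranspose2d_out(size, kernel=3, stride=2, padding=0, output_padding=0, dilation=1):
    # Output-size formula from the nn.ConvTranspose2d documentation
    return (size - 1) * stride - 2 * padding + dilation * (kernel - 1) + output_padding + 1

print(convtranspose2d_out(248, padding=1))  # 495
print(convtranspose2d_out(216, padding=1))  # 431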

I have the same problem in a similar use case, where I want to convert a feature map of size [2, 64, 128, 400] to [2, 3, 512, 1600]. With the following code, I get an output shape of [2, 3, 511, 1599]:

img_feats = torch.randn([2, 64, 128, 400]).cuda()  # input feature map with the shape stated above
img_decoder_block = nn.Sequential(
    nn.ConvTranspose2d(img_feats.size(1), img_feats.size(1) // 2, 3, stride=2, padding=1, bias=False),
    nn.ConvTranspose2d(img_feats.size(1) // 2, 3, 3, stride=2, bias=False),
    nn.BatchNorm2d(3, eps=1e-3, momentum=0.01),
    nn.ReLU(inplace=True),
    nn.Tanh(),
).cuda()
decoded_img = img_decoder_block(img_feats).cuda()
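
Applying the same output-size formula to this stack shows where the off-by-one sizes come from (using the convtranspose2d_out helper from above):

# First layer (padding=1): 128 -> 255, 400 -> 799
print(convtranspose2d_out(128, padding=1), convtranspose2d_out(400, padding=1))
# Second layer (default padding=0): 255 -> 511, 799 -> 1599
print(convtranspose2d_out(255), convtranspose2d_out(799))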

The solution to my problem was the output_padding parameter. Setting it to 1 resolves the ambiguity that arises with stride > 1, where several output shapes would all map back to the same input shape, so the layer has to be told which one to produce.
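
For reference, here is a minimal sketch of both blocks with output_padding set. These exact values worked for my shapes; for the image decoder I also assume padding=1 on the second transposed convolution, since otherwise no output_padding value lands exactly on 512 x 1600:

# First decoder block: [2, 64, 248, 216] -> [2, 64, 496, 432]
decoder_block = nn.Sequential(
    nn.ConvTranspose2d(64, 64, 3, stride=2, padding=1, output_padding=1, bias=False),
    nn.BatchNorm2d(64, eps=1e-3, momentum=0.01),
    nn.ReLU(inplace=True),
).cuda()
print(decoder_block(feats).shape)  # torch.Size([2, 64, 496, 432])

# Image decoder: [2, 64, 128, 400] -> [2, 3, 512, 1600]
# (each transposed conv now exactly doubles H and W)
img_decoder_block = nn.Sequential(
    nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1, bias=False),
    nn.ConvTranspose2d(32, 3, 3, stride=2, padding=1, output_padding=1, bias=False),
    nn.BatchNorm2d(3, eps=1e-3, momentum=0.01),
    nn.ReLU(inplace=True),
    nn.Tanh(),
).cuda()
print(img_decoder_block(img_feats).shape)  # torch.Size([2, 3, 512, 1600])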