Hi. I have a feature map of size [N, 64, 248, 216] and would like to upsample it to [N, 64, 496, 432] (i.e. doubling dims 2 and 3) using a 2D transposed convolution.
I have the following code extract:
import torch
import torch.nn as nn

feats = torch.randn([2, 64, 248, 216]).cuda()

decoder_block = nn.Sequential(
    nn.ConvTranspose2d(
        feats.size(1),  # in_channels = 64
        64,             # out_channels
        3,              # kernel_size
        stride=2,
        padding=1,
        bias=False,
    ),
    nn.BatchNorm2d(64, eps=1e-3, momentum=0.01),
    nn.ReLU(inplace=True),
).cuda()

decoded_feats = decoder_block(feats)  # module is already on the GPU, so no extra .cuda() needed
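To double-check, I computed the expected output size from the shape formula in the ConvTranspose2d docs (plain arithmetic, assuming the defaults dilation=1 and output_padding=0, which is what my code uses):

# H_out = (H_in - 1)*stride - 2*padding + dilation*(kernel_size - 1) + output_padding + 1
h_out = (248 - 1) * 2 - 2 * 1 + 1 * (3 - 1) + 0 + 1  # = 495
w_out = (216 - 1) * 2 - 2 * 1 + 1 * (3 - 1) + 0 + 1  # = 431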
And indeed, the output I get is [2, 64, 495, 431] instead of the [2, 64, 496, 432] I want. How can I fix this? Changing the stride to 1, as suggested in this issue, does not fix the problem.
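For what it's worth, a plain nn.Upsample does give me the exact target shape, but I would prefer to keep a learnable transposed convolution in the decoder, so this is only for comparison:

up = nn.Upsample(scale_factor=2, mode="nearest")
print(up(feats).shape)  # torch.Size([2, 64, 496, 432])

I really appreciate any help you can provide.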