I was reading a paper whose network-architecture figure lists the hyperparameters of each layer.
The network uses transposed convolutions to enlarge the feature map, and each such layer is supposed to upsample by a scale factor of 2.
But with the hyperparameters the figure shows:
By my calculation, it can't upsample by exactly 2: it turns a feature map of size n into 2n + 1 − 2·padding, and there is no integer padding that makes this output size equal to 2n.
Is something wrong here? I'm following the output-size formula in the PyTorch documentation.
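For reference, here is a small sketch of the calculation, assuming the layer uses kernel_size=3 and stride=2 (the combination that reproduces the 2n + 1 − 2·padding result; the figure's actual values may differ). It implements the output-length formula from the PyTorch `ConvTranspose2d` documentation in plain Python. Note that `ConvTranspose2d` also takes an `output_padding` argument, which adds extra size to one side of the output and is exactly what resolves this off-by-one:

```python
def conv_transpose_out(n, kernel_size=3, stride=2, padding=0,
                       output_padding=0, dilation=1):
    """Output length per the ConvTranspose2d formula in the PyTorch docs."""
    return ((n - 1) * stride - 2 * padding
            + dilation * (kernel_size - 1) + output_padding + 1)

# With kernel_size=3, stride=2 the formula reduces to 2n + 1 - 2*padding:
assert conv_transpose_out(10, padding=0) == 21  # 2*10 + 1
assert conv_transpose_out(10, padding=1) == 19  # 2*10 - 1

# padding=1 together with output_padding=1 recovers an exact factor of 2:
assert conv_transpose_out(10, padding=1, output_padding=1) == 20
```

So with padding=1 and output_padding=1 the layer maps n to 2n, matching the claimed scale factor.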