How to deal with non-square input sizes in the upsampling process

As mentioned, deep image segmentation networks gradually halve the input's spatial size in the encoder, then upsample the features with scale factor 2 in the deconv (transposed convolution) layers. But this becomes ambiguous when a spatial dimension is not divisible by 2 without remainder: the network output (more precisely, the output of the consecutive upsampling layers) no longer matches the input's spatial size.
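A small sketch of the mismatch (the 25x25 input size and the specific pool/deconv layers are just hypothetical examples): halving an odd dimension floors it, so doubling it back no longer recovers the original size.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 25, 25)                            # odd spatial size
down = nn.MaxPool2d(2)                                   # halves with floor: 25 -> 12
up = nn.ConvTranspose2d(3, 3, kernel_size=2, stride=2)   # doubles: 12 -> 24

y = up(down(x))
print(x.shape, y.shape)  # 25x25 in, 24x24 out -> spatial mismatch
```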
To solve this problem, I noticed that some methods apply deconv with factor 2 consecutively to enlarge the feature map to a size close to the input's, and then use F.interpolate with the `size` argument as the final layer to make sure the output size matches the input exactly.
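That final-layer trick might look like the sketch below (the 19-class logits, the 25x25 input, and the 24x24 decoder output are assumed values for illustration, not from any particular codebase):

```python
import torch
import torch.nn.functional as F

def match_to_input(feat, x):
    # resize the decoder output to the input's exact spatial size
    return F.interpolate(feat, size=x.shape[-2:], mode='bilinear', align_corners=False)

x = torch.randn(1, 3, 25, 25)      # hypothetical odd-sized input
feat = torch.randn(1, 19, 24, 24)  # decoder output after consecutive x2 deconvs
out = match_to_input(feat, x)
print(out.shape)                   # torch.Size([1, 19, 25, 25])
```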
But I'm wondering: during training, all inputs are resized to the same spatial size, which means that before the final F.interpolate the feature map already matches the input size, so that call does nothing. When testing on real-world images, however, this call will actually resize the feature map slightly.
This train/test discrepancy must have some effect on the layers' weights. How can it be avoided? I also found that DeepLabV3's segmentation code does not upsample the features step by step to such a size, but only partway (e.g. to 1/2 of the input size), and then applies F.interpolate directly. Doesn't that make the segmentation result less precise?
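For comparison, the DeepLabV3-style approach jumps straight from low-resolution logits to the input size in one interpolation (torchvision's DeepLabV3 does something similar at the end of its forward pass; the 4x4 logit size here is an assumed example for a low output resolution):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 25, 25)      # hypothetical input
logits = torch.randn(1, 19, 4, 4)  # low-resolution class logits, no learned upsampling
out = F.interpolate(logits, size=x.shape[-2:], mode='bilinear', align_corners=False)
print(out.shape)                   # torch.Size([1, 19, 25, 25])
```

The concern above is that bilinearly stretching by such a large factor may blur object boundaries compared to learned, gradual upsampling.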
Does anyone have any ideas on how to solve this problem?