But my question is: are there any reasons why one cannot use the same method to downscale images when they are bigger than the required input size of some model?
It seems to work fine and looks sensible, but does this make sense from a mathematical standpoint, or could this method introduce problems somewhere in the background?
I believe they renamed Upsample to interpolate to avoid confusing people. It feels weird that "upsample" can create smaller images. Check out the interpolate docs.
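For what it's worth, a minimal sketch of downscaling with `F.interpolate` (the shapes and values here are arbitrary; `antialias` is only available in newer PyTorch versions and only for bilinear/bicubic modes):

```python
import torch
import torch.nn.functional as F

# A batch of one 3-channel 256x256 "image" (random values for illustration).
x = torch.rand(1, 3, 256, 256)

# Downscale to 224x224, e.g. to match a typical classifier input size.
# antialias=True applies a low-pass filter first, which reduces aliasing
# artifacts when shrinking an image.
y = F.interpolate(x, size=(224, 224), mode="bilinear", antialias=True)

print(y.shape)  # torch.Size([1, 3, 224, 224])
```

Without the anti-aliasing step, plain interpolation can alias when the scale factor is large, which is the main mathematical caveat to watch for when downscaling this way.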
Welcome! I don’t think the upscale op ever did a transposed convolution. I’m guessing it just interpolated to upscale, but I could be wrong. There is something called ConvTranspose2d for that now, though.
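For reference, a quick sketch of `ConvTranspose2d` as a learnable 2x upsampler (the channel counts and the kernel/stride/padding combination below are just one common choice, not anything from this thread):

```python
import torch
import torch.nn as nn

# kernel_size=4, stride=2, padding=1 exactly doubles the spatial size,
# a common configuration in decoder/generator networks.
up = nn.ConvTranspose2d(in_channels=16, out_channels=16,
                        kernel_size=4, stride=2, padding=1)

x = torch.rand(1, 16, 32, 32)
y = up(x)

print(y.shape)  # torch.Size([1, 16, 64, 64])
```

Unlike interpolation, the kernel weights here are trained, so the network can learn how to fill in the new pixels.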
Yeah, it certainly only does interpolation without any learning; you can choose the interpolation mode. If you need learnable resampling, a (transposed) convolution or grid_sample (maybe along with the affine_grid function) could be a way to go.
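In case it helps, here is a minimal sketch of the `affine_grid` + `grid_sample` route; I use an identity transform and arbitrary shapes here, but in practice `theta` could be predicted by a small network (as in spatial transformer networks), which is where the learnable part comes in:

```python
import torch
import torch.nn.functional as F

x = torch.rand(1, 3, 32, 32)

# Identity 2x3 affine matrix; a learnable module could output this instead.
theta = torch.tensor([[[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0]]])

# Build a 16x16 sampling grid over the input, then sample the input at
# those locations -- effectively a differentiable downscaling step.
grid = F.affine_grid(theta, size=(1, 3, 16, 16), align_corners=False)
y = F.grid_sample(x, grid, mode="bilinear", align_corners=False)

print(y.shape)  # torch.Size([1, 3, 16, 16])
```

The whole pipeline stays differentiable with respect to `theta`, so gradients can flow back into whatever produced the transform.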
That’s okay then. I’ve been using ConvTranspose2d (as mentioned by @Oli) for learnable upsampling in networks, but for simple rescaling of images before feeding them to a model, interpolation seems to be fine.