Upsampling within a set of sizes dependent on the image in an nn.Module

In an nn.Module, for a model that fuses feature maps of different strides, I am wondering if it is possible to use nn.Upsample with the size parameter instead of the scale_factor parameter.

This size parameter would depend on which level of feature map is being upsampled, as well as on the input image size, which is only known during the forward pass and not when the module is instantiated.

The reason not to use scale_factor is that I would like to be able to work with images of any size, without having to resize them to a power of 2.

If you want to use a constant output size, you could create the nn.Upsample module in the __init__ method of your model and use it directly for all inputs:

import torch
import torch.nn as nn

up = nn.Upsample(size=(24, 24))

x = torch.randn(1, 3, 10, 10)
print(up(x).shape)
> torch.Size([1, 3, 24, 24])

x = torch.randn(1, 3, 40, 40)
print(up(x).shape)
> torch.Size([1, 3, 24, 24])

On the other hand, if you want a different output size depending on the current input shape, you could use a condition and either create the nn.Upsample layer in the forward method (since it doesn't have any parameters, this would be fine) or use the functional API via e.g. F.interpolate with a dynamically computed size.
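
Here is a minimal sketch of the F.interpolate approach for your fusion use case; the FuseBlock name and the particular strides are my own assumptions, and the target size is simply read off the finer feature map during the forward pass:

import torch
import torch.nn as nn
import torch.nn.functional as F

class FuseBlock(nn.Module):
    # Hypothetical fusion block: upsamples a coarse feature map to the
    # spatial size of a finer one before adding them together.
    def forward(self, fine, coarse):
        # the target size is only known here, taken from the incoming tensor
        target_size = fine.shape[-2:]
        up = F.interpolate(coarse, size=target_size, mode='nearest')
        return fine + up

block = FuseBlock()
fine = torch.randn(1, 3, 50, 50)    # e.g. stride-4 map of a 200x200 image
coarse = torch.randn(1, 3, 25, 25)  # e.g. stride-8 map
print(block(fine, coarse).shape)
> torch.Size([1, 3, 50, 50])

Since F.interpolate is purely functional, nothing needs to be created in the __init__ method, so arbitrary input sizes are handled naturally.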
You could also create a list or dict of various nn.Upsample modules in the __init__ and select the appropriate one in the forward method.
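
A sketch of that approach, assuming the set of possible target sizes is known up front (the level names and sizes here are made up for illustration):

import torch
import torch.nn as nn

class MultiScaleUp(nn.Module):
    def __init__(self):
        super().__init__()
        # one nn.Upsample per known target size, keyed by feature level
        self.ups = nn.ModuleDict({
            'p2': nn.Upsample(size=(56, 56)),
            'p3': nn.Upsample(size=(28, 28)),
        })

    def forward(self, x, level):
        # select the upsampling module matching the current feature level
        return self.ups[level](x)

model = MultiScaleUp()
x = torch.randn(1, 3, 14, 14)
print(model(x, 'p2').shape)
> torch.Size([1, 3, 56, 56])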
