nn.Upsample behavior change from 1.1.0 to 1.9.0

Hi everyone,

I have an old image super-resolution model trained with PyTorch 1.1.0. When I test it under PyTorch 1.9.0, the results are different.

I tracked it down to the output of nn.Upsample(scale_factor=2, mode='bicubic'), which changed between 1.1.0 and 1.9.0.

Here is a minimal example:

>>> input_3x3
tensor([[[[1., 2., 0.],
          [3., 4., 0.],
          [0., 0., 0.]]]])
>>> m = nn.Upsample(scale_factor=2, mode='bicubic')

In 1.1.0:

>>> m(input_3x3)
tensor([[[[ 1.0000,  1.6875,  2.0000,  1.0938,  0.0000, -0.1875],
          [ 2.2812,  3.1445,  3.3750,  1.7900,  0.0000, -0.3164],
          [ 3.0000,  3.8750,  4.0000,  2.0938,  0.0000, -0.3750],
          [ 1.6875,  2.1426,  2.1875,  1.1406,  0.0000, -0.2051],
          [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000],
          [-0.2812, -0.3633, -0.3750, -0.1963,  0.0000,  0.0352]]]])

In 1.9.0:

>>> m(input_3x3)
tensor([[[[ 0.6836,  1.0785,  1.7512,  1.4892,  0.4405, -0.1887],
          [ 1.4494,  1.8843,  2.6328,  2.1153,  0.6240, -0.2736],
          [ 2.7467,  3.2533,  4.1369,  3.1862,  0.9380, -0.4186],
          [ 2.4497,  2.8227,  3.4780,  2.6375,  0.7759, -0.3485],
          [ 0.7261,  0.8357,  1.0282,  0.7792,  0.2292, -0.1030],
          [-0.3053, -0.3551, -0.4425, -0.3374, -0.0993,  0.0445]]]])

Does anyone know what's happening here? Which version has the correct implementation?

One difference is the change of the align_corners default since 0.4.0, though I'm not sure it explains your issue. This change is also reported as a warning message, which you might have missed:

UserWarning: Default upsampling behavior when mode=bicubic is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
import torch

x = torch.tensor([[[[1., 2., 0.],
                    [3., 4., 0.],
                    [0., 0., 0.]]]])

# current default: align_corners=False (this is what raises the UserWarning above)
m = torch.nn.Upsample(scale_factor=2, mode='bicubic')
out = m(x)
print(out)
> tensor([[[[ 0.6836,  1.0785,  1.7512,  1.4892,  0.4405, -0.1887],
            [ 1.4494,  1.8843,  2.6328,  2.1153,  0.6240, -0.2736],
            [ 2.7467,  3.2533,  4.1369,  3.1862,  0.9380, -0.4186],
            [ 2.4497,  2.8227,  3.4780,  2.6375,  0.7759, -0.3485],
            [ 0.7261,  0.8357,  1.0282,  0.7792,  0.2292, -0.1030],
            [-0.3053, -0.3551, -0.4425, -0.3374, -0.0993,  0.0445]]]])

# align_corners=True: input and output corner pixels are aligned
m = torch.nn.Upsample(scale_factor=2, mode='bicubic', align_corners=True)
out = m(x)
print(out)
> tensor([[[[1.0000, 1.5320, 2.0160, 1.7440, 0.8480, 0.0000],
            [1.9920, 2.6285, 3.1695, 2.6276, 1.2660, 0.0000],
            [2.9360, 3.6516, 4.2262, 3.4276, 1.6433, 0.0000],
            [2.6640, 3.2348, 3.6778, 2.9532, 1.4127, 0.0000],
            [1.3080, 1.5807, 1.7905, 1.4348, 0.6860, 0.0000],
            [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]]]])
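
For completeness, here's the same comparison via the functional API (nn.Upsample dispatches to torch.nn.functional.interpolate internally); passing align_corners explicitly also silences the warning. A minimal sketch:

import torch
import torch.nn.functional as F

x = torch.tensor([[[[1., 2., 0.],
                    [3., 4., 0.],
                    [0., 0., 0.]]]])

# Functional equivalents of the two modules above; being explicit
# about align_corners avoids the UserWarning.
out_false = F.interpolate(x, scale_factor=2, mode='bicubic', align_corners=False)
out_true = F.interpolate(x, scale_factor=2, mode='bicubic', align_corners=True)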

Thanks for the reply.

Yes, I noticed that the default setting is align_corners=False.

When align_corners is set to True, 1.1.0 gives the same result as 1.9.0, matching what you posted.

However, as you may notice, the bottom-right element of the 1.1.0 default output is not 0, so the old default does not follow the align_corners=True behavior either.
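
To make that concrete: with align_corners=True the output corner pixels are expected to coincide exactly with the input corner pixels, which the 1.1.0 default output violates. A quick sanity-check sketch:

import torch
import torch.nn as nn

x = torch.tensor([[[[1., 2., 0.],
                    [3., 4., 0.],
                    [0., 0., 0.]]]])

out = nn.Upsample(scale_factor=2, mode='bicubic', align_corners=True)(x)

# With align_corners=True the corner pixels are preserved exactly.
print(out[0, 0, 0, 0].item())    # 1.0, equal to x[0, 0, 0, 0]
print(out[0, 0, -1, -1].item())  # 0.0, equal to x[0, 0, -1, -1]

# The 1.1.0 default output ends in 0.0352 instead of 0, so the old
# default cannot have been align_corners=True.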

Any further hints would be appreciated.