Hi everyone,
I have an old image super-resolution model trained with PyTorch 1.1.0. When I run inference with PyTorch 1.9.0, the results are different.
I tracked it down to the output of nn.Upsample(scale_factor=2, mode='bicubic'), which changed between 1.1.0 and 1.9.0. Example:
>>> input_3x3
tensor([[[[1., 2., 0.],
          [3., 4., 0.],
          [0., 0., 0.]]]])
>>> m = nn.Upsample(scale_factor=2, mode='bicubic')
In 1.1.0:
>>> m(input_3x3)
tensor([[[[ 1.0000, 1.6875, 2.0000, 1.0938, 0.0000, -0.1875],
[ 2.2812, 3.1445, 3.3750, 1.7900, 0.0000, -0.3164],
[ 3.0000, 3.8750, 4.0000, 2.0938, 0.0000, -0.3750],
[ 1.6875, 2.1426, 2.1875, 1.1406, 0.0000, -0.2051],
[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[-0.2812, -0.3633, -0.3750, -0.1963, 0.0000, 0.0352]]]])
In 1.9.0:
>>> m(input_3x3)
tensor([[[[ 0.6836, 1.0785, 1.7512, 1.4892, 0.4405, -0.1887],
[ 1.4494, 1.8843, 2.6328, 2.1153, 0.6240, -0.2736],
[ 2.7467, 3.2533, 4.1369, 3.1862, 0.9380, -0.4186],
[ 2.4497, 2.8227, 3.4780, 2.6375, 0.7759, -0.3485],
[ 0.7261, 0.8357, 1.0282, 0.7792, 0.2292, -0.1030],
[-0.3053, -0.3551, -0.4425, -0.3374, -0.0993, 0.0445]]]])
Does anyone know what is happening here? Which version has the correct implementation?
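For anyone trying to reproduce this, here is a minimal sketch that pins `align_corners` explicitly on both settings. When `align_corners` is left unset, `nn.Upsample` uses `align_corners=False` for bicubic mode, so setting it explicitly rules out a changed default as the cause and narrows the comparison down to the interpolation kernel itself. This assumes a recent PyTorch; the variable names are mine.

```python
import torch
import torch.nn as nn

# Same 3x3 input as in the post, shaped (N, C, H, W).
input_3x3 = torch.tensor([[[[1., 2., 0.],
                            [3., 4., 0.],
                            [0., 0., 0.]]]])

# Pin align_corners on both sides so version comparisons are apples-to-apples.
up_false = nn.Upsample(scale_factor=2, mode='bicubic', align_corners=False)
up_true = nn.Upsample(scale_factor=2, mode='bicubic', align_corners=True)

out_false = up_false(input_3x3)
out_true = up_true(input_3x3)

# Both produce a 6x6 output; with align_corners=True the corner samples
# map exactly onto the input corners, so out_true[..., 0, 0] equals 1.0.
print(out_false.shape, out_true.shape)
```

Running this under each PyTorch version and diffing `out_false` (and `out_true`) should show which code path changed.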