Getting different results when using nn.functional.upsample?

Recently, I used the nn.functional.upsample function, and I set all the seeds and the cuDNN determinism flags, as follows:

import random
import numpy as np
import torch

# Seed every RNG: Python, NumPy, and PyTorch (CPU and all GPUs)
random.seed(config.seed)
np.random.seed(config.seed)
torch.manual_seed(config.seed)
torch.cuda.manual_seed_all(config.seed)
torch.cuda.manual_seed(config.seed)
# Disable cuDNN autotuning and force deterministic algorithms
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True

I ran the same training code twice, but I got two different results. I finally found that the difference is caused by the upsample function.

The torch version is 0.4.0, with CUDA 8.0.

So, how should I solve this problem? Thank you very much.
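
For reference, here is a minimal sketch of the kind of cross-run check that points at upsample (the file name upsample_ref.pt and the tensor shape are arbitrary choices for illustration, not taken from my actual code): run the script twice; the first run saves a reference output, and the second run compares against it.

import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 3, 32, 32).cuda()
out = F.upsample(x, size=(64, 64), mode='bilinear')

try:
    # second run: compare against the output saved by the first run
    ref = torch.load('upsample_ref.pt').cuda()
    print('max abs diff vs. previous run:', (out - ref).abs().max().item())
except FileNotFoundError:
    # first run: save the reference output for later comparison
    torch.save(out.cpu(), 'upsample_ref.pt')
    print('saved reference output')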

Could you update PyTorch and then post a reproducible code snippet?
I know that upsample was non-deterministic in the past, but that might have changed, as this code returns the same results:

import torch

torch.manual_seed(2809)
x = torch.randn(2, 3, 24, 24).cuda()

# Reference output (default mode is 'nearest')
up = torch.nn.functional.upsample(x, size=(100, 100))

# If the forward pass is deterministic, every printed difference is zero
for _ in range(10):
    print((up - torch.nn.functional.upsample(x, size=(100, 100))).abs().max())
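
Note that this only tests the forward pass. If the divergence shows up during training, the usual suspect in older releases was the CUDA backward pass of the bilinear mode (commonly attributed to atomicAdd usage). Here is a sketch of a gradient-determinism check under that assumption, using torch.nn.functional.interpolate, which replaces the now-deprecated upsample:

import torch
import torch.nn.functional as F

torch.manual_seed(2809)
x = torch.randn(2, 3, 24, 24, device='cuda', requires_grad=True)

def upsample_grad(inp):
    # 'bilinear' exercises the CUDA backward kernel, which could be
    # non-deterministic in older releases
    out = F.interpolate(inp, size=(100, 100), mode='bilinear', align_corners=False)
    out.sum().backward()
    grad = inp.grad.clone()
    inp.grad = None  # reset so gradients don't accumulate across calls
    return grad

ref = upsample_grad(x)
for _ in range(10):
    print((ref - upsample_grad(x)).abs().max())

On recent releases you can also call torch.use_deterministic_algorithms(True), which raises an error whenever an operation without a deterministic implementation is executed, instead of silently returning non-deterministic results.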