Can't reproduce training process across different platforms even with a fixed random seed

I want to reproduce a training run on two different platforms. I fixed the random seeds with the following code:

import random

import numpy as np
import torch

def FixedSeed(seed: int = 1122) -> None:
    # Seed the Python, NumPy, and PyTorch (CPU and CUDA) RNGs.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
    # Disable cuDNN autotuning and force deterministic cuDNN kernels.
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True
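
For reference, newer PyTorch releases (1.8+) also expose torch.use_deterministic_algorithms; a stricter version of the function might look like this (untested, and the extra settings below go beyond what I originally ran):

import os
import random

import numpy as np
import torch

def FixedSeedStrict(seed: int = 1122) -> None:
    # Same seeding as above.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True
    # Needed by some deterministic CUBLAS kernels (CUDA 10.2+);
    # should be set before the first CUDA call.
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
    # Raise an error whenever an op has no deterministic implementation
    # (available in PyTorch 1.8 and later).
    torch.use_deterministic_algorithms(True)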

With this in place, training is reproducible when I rerun it on the same platform, but the runs on the two platforms are not exactly the same. Does anyone know what is causing the difference?

Hi Jim!

This is to be expected. PyTorch does not offer any assurance that
computations will be exactly reproducible across platforms.

As long as your results agree within reasonable round-off error (say,
after a single forward and backward pass – deviations may accumulate
if you train for many iterations), you are doing as well as can be expected.
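
For example, a quick check might look like this (illustrative only; the
file names are placeholders for tensors you would save yourself on each
platform, e.g. the loss after one forward and backward pass):

import torch

# Hypothetical files: save the same tensor on each platform with
# torch.save(...) and compare them here.
out_a = torch.load("output_platform_a.pt")
out_b = torch.load("output_platform_b.pt")

# Bitwise equality generally will not hold across platforms.
print(torch.equal(out_a, out_b))

# Agreement within round-off error is the realistic target.
print(torch.allclose(out_a, out_b, rtol=1e-5, atol=1e-7))

If allclose passes after a single step but the runs drift apart over many
iterations, that is just the accumulation of round-off I mentioned.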

Best.

K. Frank

Thanks for your answer!