I am testing whether my code reproduces the same results across different servers. Each server trains with a single GPU: one has a 2080 Ti and the other a 3080, running Ubuntu 18.04 and Ubuntu 20.04 respectively. Both servers use the same conda virtual environment (identical PyTorch, NumPy, and Python versions), the same code, the same random seed, and so on. But strangely, I get different results: the test accuracy of the models trained on the two servers differs by almost 2%. This is confusing me; I hope someone can help.
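For reference, this is roughly the seeding setup I use on both machines (the helper name `seed_everything` is just my own wrapper, and I also set the cuDNN flags that are commonly recommended for determinism; note this only pins results within one hardware/software setup, not across GPUs):

```python
import random
import numpy as np
import torch

def seed_everything(seed: int = 42) -> None:
    # Seed every RNG the training loop typically touches.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # no-op on CPU-only machines
    # Ask cuDNN for deterministic kernels (can be slower).
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

# Sanity check: re-seeding reproduces the same draws in one process.
seed_everything(42)
a = torch.randn(3)
seed_everything(42)
b = torch.randn(3)
assert torch.equal(a, b)
```

Within a single machine this makes runs repeatable, which is why the cross-server gap surprised me.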
Deterministic results via seeding etc. are not guaranteed across different hardware/software setups.
Depending on the variance you see in the final accuracy when using different seeds, a difference of 2% might be expected.
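To judge whether 2% falls within seed-to-seed noise, you could re-run the training script on one machine with several seeds and compare the spread to the cross-machine gap. A minimal sketch of the bookkeeping, with placeholder accuracy values standing in for real runs:

```python
import statistics

# Hypothetical final accuracies from re-running the SAME training
# script with different seeds on ONE machine (placeholder values).
accuracies = [0.912, 0.897, 0.921, 0.905, 0.889]

mean = statistics.mean(accuracies)       # average accuracy across seeds
std = statistics.stdev(accuracies)       # seed-to-seed standard deviation
spread = max(accuracies) - min(accuracies)

print(f"mean={mean:.3f} std={std:.3f} spread={spread:.3f}")
```

If the ~2% cross-machine gap is no larger than the spread you observe across seeds on a single machine, it is plausibly run-to-run noise rather than a bug.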
Thank you sincerely for your reply; it has benefited me a lot.