Yes, I've read this a number of times, but there is an older answer by Sam Gross which states: “You should get consistent random numbers if you’re using the same seed, PyTorch version, and CUDA version even if it’s run on a different physical GPU.” Is that statement incorrect, then?
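For context, a minimal sketch of the kind of reproducibility being discussed: reseeding PyTorch's global RNG and drawing again yields identical numbers on the same machine. (Whether this also holds across *different* physical GPUs, as the quoted answer claims, is exactly the point in question, and this snippet does not settle it.)

```python
import torch

# Seed the global RNG and draw a tensor.
torch.manual_seed(42)
a = torch.randn(3)

# Reseed with the same value and draw again.
torch.manual_seed(42)
b = torch.randn(3)

# On the same machine and PyTorch version, the draws match exactly.
print(torch.equal(a, b))
```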