Are Random Functions Deterministic Given Seed?

Is there any guarantee that, given the same seed, PyTorch random functions (e.g., torch.randperm) will always generate the same results in every case (e.g., across different PyTorch versions, different machines, or GPU vs. CPU)?


No. PyTorch's CPU and GPU backends use different random number generators (RNGs), so even if you set the same seed, you will get different random numbers on the CPU and on the GPU (a short sketch demonstrating this follows the list below).

Here’s why:

  • Different Algorithms:

The CPU and GPU use different algorithms for random number generation (the default CPU generator is a Mersenne Twister, while CUDA devices use a counter-based Philox generator), which leads to different sequences even with the same seed.

  • Hardware Differences:

GPUs generate random numbers in a massively parallel fashion, which favors counter-based generators that can produce many independent streams, rather than the sequential state updates typically used on the CPU.
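
Here is a minimal sketch illustrating the point; the exact values depend on your PyTorch build, and the CUDA part assumes a GPU is available:

```python
import torch

torch.manual_seed(0)          # seeds the CPU RNG (and CUDA RNGs in recent versions)
cpu_vals = torch.rand(3)      # drawn from the CPU generator (Mersenne Twister)

if torch.cuda.is_available():
    torch.cuda.manual_seed(0)                 # seed the current CUDA device with the same value
    gpu_vals = torch.rand(3, device="cuda")   # drawn from the CUDA generator (Philox)
    print(cpu_vals)
    print(gpu_vals.cpu())
    # The two sequences will generally differ, even though both seeds are 0.
```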

How to Ensure Reproducibility:

  • Use the Same Device:

If you need reproducible results, always use the same device (either CPU or a specific GPU) for your experiments.

  • Set the Seed:

Use torch.manual_seed() to seed the CPU RNG (recent PyTorch versions also seed all CUDA devices from this call), and torch.cuda.manual_seed() / torch.cuda.manual_seed_all() to seed the GPU RNG(s) explicitly (see the sketches after this list).

  • Save and Load RNG State:

If you need to reproduce results across different sessions, you can save the RNG state using torch.get_rng_state() and restore it later using torch.set_rng_state(); the CUDA equivalents are torch.cuda.get_rng_state() and torch.cuda.set_rng_state().
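
For example, re-seeding before each run on a fixed device reproduces the same draws (a small sketch, using the CPU generator):

```python
import torch

def sample(seed: int) -> torch.Tensor:
    # Re-seeding the CPU RNG before drawing makes the draw reproducible.
    torch.manual_seed(seed)
    return torch.randperm(10)

# Same seed, same device -> the same permutation every time.
assert torch.equal(sample(42), sample(42))
```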
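
And a sketch of saving and restoring the RNG state mid-stream, which reproduces the draws that followed the saved point (torch.save/torch.load would let you persist the state tensor across sessions):

```python
import torch

torch.manual_seed(0)
_ = torch.rand(5)                  # advance the generator a bit

state = torch.get_rng_state()      # snapshot the CPU RNG state
a = torch.randperm(10)

torch.set_rng_state(state)         # rewind to the snapshot
b = torch.randperm(10)

assert torch.equal(a, b)           # the draws after the restore match
# For CUDA, torch.cuda.get_rng_state() / torch.cuda.set_rng_state() work analogously.
```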

As for different PyTorch versions: results match only if the RNG algorithm and its implementation are exactly the same, and PyTorch explicitly does not guarantee reproducible results across releases, individual commits, or different platforms.

As for torch.randperm, the in-source documentation shows that internally it calls argsort (or maybe sort?) with stable=True. That implies the result will always be the same if the RNG and seed state are the same.
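
A quick way to check that claim on your own setup, sketched with an explicit torch.Generator so the global RNG state is left untouched:

```python
import torch

g1 = torch.Generator().manual_seed(1234)
g2 = torch.Generator().manual_seed(1234)

p1 = torch.randperm(1000, generator=g1)
p2 = torch.randperm(1000, generator=g2)

# Identical generator state -> identical permutation.
assert torch.equal(p1, p2)
```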

I have an unanswered question about whether argsort/sort with stable=False is always reproducible (i.e. "deterministic" given the same inputs). It could conceivably be reproducible, but that depends on whether there are potential race conditions in the parallel sorting algorithm. Either way, that is a separate issue from the RNG and seed.
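
One non-conclusive way to probe this empirically is to repeat the same unstable sort many times on identical input and compare the results. This sketch assumes a CUDA device, since the question mainly concerns parallel sorting, and a passing check does not prove determinism in general:

```python
import torch

if torch.cuda.is_available():
    # Integers with many duplicates exercise tie-breaking in the sort.
    x = torch.randint(0, 10, (1_000_000,), device="cuda")
    ref_vals, ref_idx = torch.sort(x, stable=False)
    for _ in range(100):
        vals, idx = torch.sort(x, stable=False)
        # If an unstable parallel sort broke ties non-deterministically,
        # the returned indices could differ between runs on identical input.
        assert torch.equal(vals, ref_vals) and torch.equal(idx, ref_idx)
```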

See also Randomness
