Why doesn't setting a random seed give the same results across runs?

Hello everyone,

I am training some deep learning models using PyTorch, which also involves using numpy. Since the random number generation is pseudo-random rather than truly random, why aren't the numbers (accuracy etc.) the same across different runs?

I am doing torch.cuda.manual_seed(seed_val) to set the random seed.

I mean, even if I do not set a random seed, there should be some default seed my code runs with, which should give the same results across different runs. Is there something more to it?
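For reference, here is a minimal sketch of what I mean (42 stands in for my seed_val; the prints are just for illustration):

```python
import torch

# Seeds only the CUDA generator(s)
torch.cuda.manual_seed(42)

# These values still change between runs, since the CPU generator
# was never seeded here
print(torch.rand(3))
print(torch.randn(2, 2))
```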

Please do let me know if something is not clear.
Thanks,
Megh

By default the system PRNG is used, if I’m not mistaken, so your code will not be deterministic if you don’t properly seed it manually.

Since you are using 3rd party libraries, such as numpy, I would recommend also seeding them.
Have a look at the Reproducibility docs for more information.
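Something along these lines should work as a starting point (seed_everything is just a helper name for this sketch, and forcing deterministic cuDNN algorithms can slow things down):

```python
import random

import numpy as np
import torch

def seed_everything(seed: int) -> None:
    random.seed(seed)                 # Python's built-in RNG
    np.random.seed(seed)              # numpy's global RNG
    torch.manual_seed(seed)           # CPU generator (and GPU generators in recent versions)
    torch.cuda.manual_seed_all(seed)  # all GPU generators explicitly

seed_everything(42)

# Some GPU ops are still nondeterministic by default; cuDNN can be
# forced to pick deterministic algorithms at a potential speed cost
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
```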

Thanks a lot @ptrblck. Yes, I have gone through the docs.
Thanks again,
Megh