Strange code reproducibility behaviour

Hey, I have a strange code reproducibility issue:


import random

import numpy as np
import torch

random.seed(0)
np.random.seed(0)
torch.manual_seed(0)
torch.cuda.manual_seed(0)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False

All these options are set at the beginning of the training function, the DataLoader's num_workers is set to zero, and I train on a single GPU. The DataLoader loads the data once and then only reuses it.

At the first launch of the training function I get RESULT 1, but at the second and all subsequent launches I get RESULT 2.

If I increase the number of training epochs, the first launch gives RESULT 2 (plus the extra epochs), and all subsequent launches give RESULT 3.

Any ideas what could be causing this?

You might get non-deterministic results if you don't set all of the required flags mentioned in the reproducibility docs.
Could you take a look at those docs and add e.g. torch.use_deterministic_algorithms(True) to the script?
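Something like the following helper, assembled from the flags in the reproducibility docs, is one way to set everything in one place (the function name `seed_everything` is just an illustrative choice, and the `CUBLAS_WORKSPACE_CONFIG` value shown is one of the two values the docs list for deterministic cuBLAS on CUDA >= 10.2):

```python
import os
import random

import numpy as np
import torch


def seed_everything(seed: int = 0) -> None:
    """Seed all RNGs and request deterministic kernels."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)           # also seeds the CUDA RNGs in recent PyTorch
    torch.cuda.manual_seed_all(seed)  # explicit, in case of multiple devices
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    # Needed for deterministic cuBLAS; should be set before the first CUDA call
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
    # Errors out on ops that have no deterministic implementation,
    # instead of silently producing run-to-run differences
    torch.use_deterministic_algorithms(True)


# Quick sanity check: two runs from the same seed produce identical tensors
seed_everything(0)
a = torch.randn(4)
seed_everything(0)
b = torch.randn(4)
print(torch.equal(a, b))
```

Note that `torch.use_deterministic_algorithms(True)` can raise a `RuntimeError` mid-training if the model hits an op without a deterministic implementation, which is useful here precisely because it points at the op responsible for the divergence.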