Fixing the randomness of a model

I want to fix the randomness of training so I can compare the performance of different hyperparameters. I currently seed random.seed(), numpy.random.seed(), torch.manual_seed(), and torch.cuda.manual_seed(), but I still can't get identical results across different training runs, even when I set the dropout probability p to 0. The results are similar, but there are still small differences, especially in the AUC value.
I suspect the SGD optimizer may still start from a different random state, but I can't verify this. I also want to use dropout with a non-zero p during training, but I don't know how to keep it in the same random state. Please give me some advice if you have any ideas about this. Thank you!
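For reference, the seeding described above can be collected into one helper. This is just a sketch of what I am doing; the name seed_everything and the seed value are my own:

```python
import random

import numpy as np
import torch


def seed_everything(seed: int = 42) -> None:
    """Seed every RNG the training loop touches (seed value is arbitrary)."""
    random.seed(seed)                 # Python's built-in RNG
    np.random.seed(seed)              # NumPy's global RNG
    torch.manual_seed(seed)           # PyTorch CPU RNG
    torch.cuda.manual_seed_all(seed)  # all GPU RNGs, in case of multi-GPU training
```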

In addition to setting the seeds, you should also disable cudnn.benchmark and enable the deterministic cuDNN behavior if you are using a GPU. Have a look at the Reproducibility docs for more information.
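Concretely, that means setting these two flags once at the start of the script:

```python
import torch

# Disable benchmark mode so cuDNN stops auto-tuning (and potentially picking
# different convolution algorithms) per input size, and force the
# deterministic algorithm variants instead.
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
```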

Also, how large are the differences? As explained in the docs, some atomic operations might yield non-deterministic behavior.
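If the remaining differences do come from such operations, recent PyTorch versions also offer an opt-in switch that raises an error whenever a non-deterministic implementation would be used, so you can find the offending op instead of guessing (check the Reproducibility docs for availability in your version; on CUDA, some ops additionally require the CUBLAS_WORKSPACE_CONFIG environment variable to be set):

```python
import torch

# Use only deterministic implementations; operations without a deterministic
# variant will raise a RuntimeError instead of silently varying between runs.
torch.use_deterministic_algorithms(True)
```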

Thank you, it works. Now the results are strictly the same.