Even though the seed is fixed, I still get different results

This is my main function:

if __name__ == '__main__':
    args = get_settings()
    print('===================All is finished====================')

Every time I run this code with the same hyper-parameters, I get different performance on the test dataset.
And this is my set_random_seed() function:

import os
import torch

def set_random_seed(seed):
    # Note: this only affects subprocesses; the current interpreter's
    # hash seed is fixed at startup
    os.environ['PYTHONHASHSEED'] = str(seed)
    # Force cuDNN to pick deterministic kernels and disable autotuning
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

I even added torch.manual_seed(seed) and torch.cuda.manual_seed_all(seed) to the __init__() of every model and submodel. The trained models still show different performance on the test dataset.
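For reference, a consolidated helper that folds those per-module calls into set_random_seed() might look like this (a sketch; the NumPy call is an assumption about your data pipeline):

```python
import os
import random

import numpy as np
import torch

def set_random_seed(seed):
    """Seed every RNG that commonly affects a PyTorch training run."""
    random.seed(seed)                 # Python's built-in RNG
    np.random.seed(seed)              # NumPy RNG (e.g. data augmentation)
    torch.manual_seed(seed)           # CPU RNG (also seeds CUDA in recent versions)
    torch.cuda.manual_seed_all(seed)  # all CUDA devices, to be explicit
    os.environ['PYTHONHASHSEED'] = str(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```

Calling this once at the top of the `if __name__ == '__main__':` block, before any model or DataLoader is built, is usually enough; DataLoader workers additionally need a `worker_init_fn` or `generator` argument to be reproducible.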

By the way, the problem can't lie in the test code: I have tested the same trained model several times, and every time I got the same results.

I am so confused :frowning_face:. Can anyone give me a hand?

Have a look at the Reproducibility docs and in particular set torch.use_deterministic_algorithms(True), which will raise an error if no deterministic algorithm can be found for your workload.
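A minimal sketch of that suggestion (the CUBLAS_WORKSPACE_CONFIG line is taken from the same Reproducibility docs and only matters for some cuBLAS ops on CUDA >= 10.2):

```python
import os
import torch

# Required by some cuBLAS ops on CUDA >= 10.2 when determinism is enforced
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

# Error out instead of silently falling back to a nondeterministic kernel
torch.use_deterministic_algorithms(True)

x = torch.rand(8, 8)
y = x @ x  # matmul has a deterministic implementation, so this runs fine
```

If your workload hits an op without a deterministic implementation, this raises a RuntimeError naming the offending op, which tells you exactly where the run-to-run variance is coming from.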

Thanks. Very helpful! :smile: