This is my main function:
if __name__ == '__main__':
    args = get_settings()
    set_random_seed(args.seed)
    train(args=args)
    test(args=args)
    print('===================All is finished====================')
Every time I run this code with the same hyper-parameters, I get different performance on the test dataset.
And this is my set_random_seed() function:
import os
import random

import numpy as np
import torch

def set_random_seed(seed):
    random.seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
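For what it's worth, seeding the stdlib RNG alone is verifiably deterministic — a minimal stdlib-only check like the one below passes every time (it doesn't touch PyTorch or CUDA, so any remaining nondeterminism presumably comes from the GPU side of training):

```python
import random

def draws(seed, n=5):
    # Re-seed the stdlib RNG, then draw n values.
    random.seed(seed)
    return [random.random() for _ in range(n)]

# Identical seeds give identical sequences; different seeds differ.
print(draws(0) == draws(0))  # True
print(draws(0) == draws(1))  # False
```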
I even added torch.manual_seed(seed) and torch.cuda.manual_seed_all(seed) in the __init__() of every model and submodel. The trained models still get different performance on the test dataset.
By the way, the problem can't lie in the test code: I have tested the same trained model several times, and got the same results every time.
I am so confused. Can anyone give me a hand?