Hello everybody!
I set the seeds as follows, but still ran into a problem:
Code:
import os
import random

import numpy as np
import torch

def seed_torch(seed=0):
    random.seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # if you are using multi-GPU
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True

seed_torch()
def main():
    …
    rawnet.train()
    for i, data in enumerate(train_dataloader):
        seed_torch()
        …
    …
    seed_torch()
    rawnet.eval()
    seed_torch()
    with torch.no_grad():
        for i, data in enumerate(val_dataloader):
            seed_torch()
            …
I get different results on every training run, even with the same code, environment, and everything else. In particular, the training loss is almost identical across runs, but the validation loss DIFFERS, even with seed_torch() called everywhere inside the loops. Please help me solve this problem! Thank you!
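For comparison, here is a minimal, self-contained sketch of the kind of seeding that should make a run fully reproducible, including the DataLoader's shuffling order (the dataset, model-free loop, and `seed_everything` helper below are made-up stand-ins, not my real training code; `generator` and `worker_init_fn` are standard `DataLoader` parameters):

```python
import random

import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset


def seed_everything(seed=0):
    """Seed every RNG a typical PyTorch training loop touches."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True


def run_once(seed=0):
    """One tiny pass over random data; returns the summed batch means."""
    seed_everything(seed)
    data = torch.randn(32, 4)  # stand-in for a real dataset
    dataset = TensorDataset(data)
    # A seeded torch.Generator makes the shuffling order reproducible,
    # and worker_init_fn reseeds each worker process (relevant when
    # num_workers > 0, where each worker has its own RNG state).
    g = torch.Generator()
    g.manual_seed(seed)
    loader = DataLoader(
        dataset,
        batch_size=8,
        shuffle=True,
        num_workers=0,
        worker_init_fn=lambda wid: seed_everything(seed + wid),
        generator=g,
    )
    total = 0.0
    for (batch,) in loader:
        total += batch.mean().item()
    return total


print(run_once(0) == run_once(0))  # True: same seed, identical result
```

With this setup, two runs with the same seed traverse the data in the same order and produce bit-identical results on CPU; on GPU, some cuDNN kernels can still be nondeterministic unless the cuDNN flags above are set.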
Are your results very different? I have an MNIST app, and the loss and accuracy vary a little every time I run it, but the variation is tiny: in terms of accuracy, the change is below 0.5%.