I tried to seed everything so that I could reproduce the same results in PyTorch, using this block of code:
    import numpy as np
    import torch

    def seed_everything(SEED):
        np.random.seed(SEED)
        torch.manual_seed(SEED)
        torch.cuda.manual_seed(SEED)
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
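For completeness, I call it once at the very top of the script, before building the model and the data loaders, roughly like this (the seed value here is just a placeholder):

    seed_everything(42)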
and it was successful. But the problem arose when I removed the validation part from my train function (I removed only the validation code block, not the validation data). I expected the results of each training epoch to match those of my previous kernel, where the train function contained both the training and validation parts, but they did not; the results were different. A sketch of what I mean is below.
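Here is a minimal sketch of the structure I am describing; the loader, loss, and optimizer names are placeholders, not my exact code:

    def train(model, optimizer, criterion, train_loader, valid_loader, num_epochs):
        for epoch in range(num_epochs):
            # training part (present in both kernels)
            model.train()
            for inputs, targets in train_loader:
                optimizer.zero_grad()
                loss = criterion(model(inputs), targets)
                loss.backward()
                optimizer.step()

            # validation part (the block I removed in the second kernel)
            model.eval()
            with torch.no_grad():
                for inputs, targets in valid_loader:
                    val_loss = criterion(model(inputs), targets)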
My question is: how can

    model.eval()
    with torch.no_grad():
        ...

affect my training at all, since these lines are supposed to turn everything off, i.e. put the model in evaluation mode and disable gradient tracking?
I am not using any learning-rate scheduler or metric that adjusts the learning rate based on validation loss/accuracy.
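In other words, my optimizer setup is roughly the following, with a fixed learning rate; the optimizer choice and learning rate here are placeholders:

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    # no torch.optim.lr_scheduler.ReduceLROnPlateau or anything else
    # that reads validation metrics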