Can validation code block affect training?

I tried to seed everything so that I could reproduce the same results in PyTorch, using this block of code:

import numpy as np
import torch

def seed_everything(SEED):
    # seed NumPy and PyTorch (CPU and current CUDA device)
    np.random.seed(SEED)
    torch.manual_seed(SEED)
    torch.cuda.manual_seed(SEED)
    # make cuDNN deterministic (at some cost in speed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

and it was successful. But a problem arose when I removed the validation part from my train function (only the validation code block, not the validation data). I expected the result of each training epoch to match my previous kernel, where the train function ran both the training and validation parts, but the results were different.

My question is: how can

model.eval()
with torch.no_grad():

affect my training, when these lines are supposed to turn everything off?

I didn’t use any metric that adjusts the learning rate based on validation loss/accuracy.

Hello Aptha!

Well, any number of things that could affect your training could be hiding
in the code you allude to.

One possibility is that if both your training and validation code consume
pseudo-random numbers, turning off your validation could cause different
pseudo-random numbers to be fed to your training, affecting the training
results.

For example, if both your training and validation iterate over a shuffled
DataLoader, you could see this kind of issue.
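
Here is a minimal sketch (not your actual code; the dataset, loaders, and loop
are made up for illustration) of how iterating a shuffled DataLoader during
validation advances the global RNG state, so the next training epoch sees a
different shuffle:

import torch
from torch.utils.data import DataLoader, TensorDataset

def run(with_validation):
    torch.manual_seed(0)
    train_loader = DataLoader(TensorDataset(torch.arange(8).float()),
                              batch_size=4, shuffle=True)
    val_loader = DataLoader(TensorDataset(torch.arange(8).float()),
                            batch_size=4, shuffle=True)
    for epoch in range(2):
        # each pass over a shuffled DataLoader draws from the global RNG
        train_order = [batch[0].tolist() for batch in train_loader]
        if with_validation:
            _ = [batch for batch in val_loader]
        print(f"with_val={with_validation}, epoch {epoch}: {train_order}")

run(with_validation=True)
run(with_validation=False)
# epoch 0 matches in both runs; epoch 1's training shuffle (almost surely)
# differs, because the validation pass consumed random numbers in between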

Best.

K. Frank

So basically, by removing the validation code block, the random numbers generated during training are different from the previous version where the validation code was present.
I think this makes sense, since I noticed the difference in the 2nd epoch and not the 1st; a quick check like the one below confirms it.
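
This is just a toy check (the torch.rand calls stand in for whatever the
validation loop actually does); it shows that torch.no_grad() does not freeze
the RNG, so any random draws made during validation shift the state seen by
later training epochs:

import torch

torch.manual_seed(0)
with torch.no_grad():
    _ = torch.rand(3)          # a "validation" draw; no_grad doesn't freeze the RNG
next_training_draw = torch.rand(3)

torch.manual_seed(0)
draw_without_validation = torch.rand(3)

# prints False: the intervening draw shifted the RNG state for everything after it
print(torch.equal(next_training_draw, draw_without_validation))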

Thanks, K. Frank