Continuing training after eval gives different accuracy than training alone

Hi all,
Say I have a model that trains for 10 epochs, and I want to eval the model after every epoch. Here are the two experiments.
Exp1.

for epoch in range(10):
    model.train()                # one epoch of training
    calculate_train_accuracy()

Exp2.

for epoch in range(10):
    model.train()                # one epoch of training
    model.eval()                 # switch to eval mode before inference
    with torch.no_grad():
        for i, data in enumerate(dataset_test.dataloader):
            model.data = data    # feed the test batch to the model
    calculate_train_accuracy()

Shouldn’t the two experiments give the same accuracy in the last epoch if I set the same random seed? But I got different accuracy values. Does anyone have an idea why?
Thank you all.


Solution: I figured out the difference was related to the DataLoader. Even if you fix the random seed outside the epoch for-loop, a loader created with shuffle=True draws from the global RNG whenever you iterate over it. So running the shuffled test loader in Exp2 advances the RNG state, which changes the shuffle order the training loader produces in later epochs, and the two runs diverge.
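A minimal sketch of this effect, assuming a recent PyTorch version: iterating a shuffled DataLoader that has no explicit generator advances the global RNG state, while giving the test loader its own torch.Generator (the seed 123 below is arbitrary) leaves the global RNG, and hence the training shuffle order, untouched.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

test_ds = TensorDataset(torch.arange(8))

# A shuffled DataLoader without an explicit generator draws its base seed
# from the *global* RNG each time an iterator is created, so merely running
# the eval loop shifts the shuffle order the train loader produces afterwards.
torch.manual_seed(0)
leaky_loader = DataLoader(test_ds, batch_size=4, shuffle=True)
before = torch.random.get_rng_state()
_ = list(leaky_loader)            # stand-in for the eval pass
leak_detected = not torch.equal(before, torch.random.get_rng_state())
print(leak_detected)              # True: the eval loop advanced the global RNG

# Fix: give the test loader its own Generator, so iterating it
# does not touch the global RNG that training depends on.
gen = torch.Generator().manual_seed(123)
isolated_loader = DataLoader(test_ds, batch_size=4, shuffle=True, generator=gen)
before = torch.random.get_rng_state()
_ = list(isolated_loader)
isolated_ok = torch.equal(before, torch.random.get_rng_state())
print(isolated_ok)                # True: global RNG state unchanged
```

With the dedicated generator (or with shuffle=False plus a generator), Exp1 and Exp2 see identical training shuffle orders and should produce the same final accuracy.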