'shuffle' in dataloader

You are welcome!
I do not know your model's definition, but I gather you are using BatchNorm (and maybe Dropout). These layers behave differently in train and eval mode. For instance, if the model is left in train mode (model.train()), the running mean/variance in every BatchNorm layer is updated on the validation set as well, once per batch, so each validation batch sees slightly different BatchNorm statistics. That is not desirable: we want to update BatchNorm's running mean/variance only during training, not during validation or testing. In other words, during validation each batch is normalized with statistics that were just changed by the previous validation batch, so the overall loss will not be the same across runs. Calling model.eval() before validation freezes these statistics.
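To make this concrete, here is a minimal pure-Python sketch (a hypothetical, simplified stand-in for torch.nn.BatchNorm1d with a single feature) showing that the running statistics move in train mode but stay frozen in eval mode:

```python
class MiniBatchNorm:
    """Toy single-feature BatchNorm: running stats update only in train mode."""

    def __init__(self, momentum=0.1, eps=1e-5):
        self.momentum = momentum
        self.eps = eps
        self.running_mean = 0.0
        self.running_var = 1.0
        self.training = True

    def train(self):
        self.training = True

    def eval(self):
        self.training = False

    def forward(self, batch):
        if self.training:
            # Train mode: normalize with the current batch's statistics
            # and fold them into the running estimates.
            mean = sum(batch) / len(batch)
            var = sum((x - mean) ** 2 for x in batch) / len(batch)
            self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean
            self.running_var = (1 - self.momentum) * self.running_var + self.momentum * var
        else:
            # Eval mode: reuse the frozen running statistics.
            mean, var = self.running_mean, self.running_var
        return [(x - mean) / (var + self.eps) ** 0.5 for x in batch]


bn = MiniBatchNorm()
bn.forward([1.0, 2.0, 3.0])      # train mode: running_mean drifts toward 2.0
bn.eval()
frozen_mean = bn.running_mean
bn.forward([10.0, 20.0, 30.0])   # eval mode: running stats untouched
assert bn.running_mean == frozen_mean
```

If validation is run without calling eval(), every validation batch takes the first branch and shifts the running statistics, which is exactly why the reported loss drifts.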

Best