Validation loss changes after saving and reloading the model

When I train a ResNet-50 network, I compute the validation set loss after every epoch. At one point the validation loss was large, so I saved the model, then loaded that saved model and recomputed the validation loss, but the loss I get now is different. Why is this happening?
Here is the code to calculate the validation set loss:

import numpy as np
import torch as t
from tqdm import tqdm

# `opt` is an external config object (not shown here).

@t.no_grad()
def val(model, dataloader):
    model.eval()
    criterion = t.nn.MSELoss()
    loss_list = []

    for ii, (val_img, score) in tqdm(enumerate(dataloader)):
        score = score.float()
        if opt.use_gpu and t.cuda.is_available():
            val_img = val_img.cuda()
            score = score.cuda()
        output = model(val_img).squeeze()
        loss = criterion(output, score)
        loss_list.append(loss.item())

    val_mse = np.mean(np.array(loss_list))
    model.train()
    return val_mse
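For comparison, a save/load round-trip that reproduces the same outputs looks roughly like this. The model, input, and checkpoint path below are hypothetical stand-ins; the point is that once both copies hold identical weights and are put in eval mode, layers such as BatchNorm use the same stored running statistics, so the loss should match. Forgetting `model.eval()` after loading is a common reason it does not.

```python
import torch

def make_model():
    # Toy network with a BatchNorm layer, which behaves differently
    # in train mode (batch statistics) vs eval mode (running statistics).
    return torch.nn.Sequential(
        torch.nn.Linear(4, 8),
        torch.nn.BatchNorm1d(8),
        torch.nn.Linear(8, 1),
    )

model = make_model()
x = torch.randn(16, 4)

# Save only the parameters and buffers (including BatchNorm running stats).
torch.save(model.state_dict(), "checkpoint.pth")

reloaded = make_model()
reloaded.load_state_dict(torch.load("checkpoint.pth"))

# Identical weights + eval mode on both copies -> identical outputs.
model.eval()
reloaded.eval()
with torch.no_grad():
    same = torch.allclose(model(x), reloaded(x))
print(same)
```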

How large is the difference, and are you using any random transformations in your validation dataset?
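To illustrate why the question about random transformations matters: if the validation transform includes any randomness (random crops, flips, added noise, ...), every evaluation sees different inputs, so the measured loss changes between runs even with a fixed model. A minimal sketch, with a toy linear model standing in for ResNet-50 and random noise standing in for an augmentation:

```python
import torch

torch.manual_seed(0)

# Toy stand-ins for the model and a fixed validation batch.
model = torch.nn.Linear(4, 1)
val_img = torch.randn(8, 4)
score = torch.randn(8)
criterion = torch.nn.MSELoss()

def val_loss(img):
    with torch.no_grad():
        return criterion(model(img).squeeze(), score).item()

# Deterministic pipeline: the same input gives the same loss every time.
deterministic_match = val_loss(val_img) == val_loss(val_img)

# "Augmented" pipeline: each evaluation perturbs the input, so the
# loss differs between two otherwise identical evaluations.
loss_a = val_loss(val_img + 0.1 * torch.randn_like(val_img))
loss_b = val_loss(val_img + 0.1 * torch.randn_like(val_img))
print(deterministic_match, loss_a != loss_b)
```

In practice this is why validation/test dataloaders normally use a deterministic transform (e.g. a fixed resize and center crop) rather than the training augmentations.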