Saved CNN model giving different results

I trained a batch-normalized VGG16 CNN and saved the best model parameters. During training, the best accuracy on the validation set was 97.65%. I then loaded the saved parameters with model.load_state_dict and ran the model on the validation set again. I did set the model to eval mode with model.eval() and also wrapped the evaluation in torch.no_grad(), but the accuracy now differs:
original: 97.65%
after loading: 96.45%, 97.12%, 96.87% on three different runs
What could be the issue?
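
For reference, the reload-and-evaluate step looks roughly like this (a simplified sketch; the checkpoint path, num_classes and val_loader are placeholders for my actual checkpoint file, class count and validation DataLoader):

import torch
from torchvision.models import vgg16_bn

# Rebuild the same architecture and load the saved best parameters.
model = vgg16_bn(num_classes=10)                    # num_classes is a placeholder
model.load_state_dict(torch.load("best_model.pt"))  # placeholder path
model.eval()                                        # put BatchNorm/Dropout into eval behaviour

correct = 0
total = 0
with torch.no_grad():
    for images, labels in val_loader:               # val_loader: my existing validation DataLoader
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.size(0)
print("validation accuracy: {:.2f}%".format(100.0 * correct / total))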

You could debug whether the saved and reloaded parameters actually match with something like this:

def areTorchModulesEqual(module1, module2):
    # Compare two modules parameter by parameter; any differing element
    # means they are not equal. Note that zip stops at the shorter sequence,
    # so both modules should have the same number of parameter tensors.
    for p1, p2 in zip(module1.parameters(), module2.parameters()):
        if p1.data.ne(p2.data).sum() > 0:
            return False
    return True

def whichModulesHaveBeenUpdated(model1_list, model2_list):
    # Walk two lists of modules in parallel and report which ones differ.
    for index, module1 in enumerate(model1_list):
        module2 = model2_list[index]
        if areTorchModulesEqual(module1, module2):
            print("At index " + str(index) + " module1 and module2 are equal; the module is printed below")
        else:
            print("At index " + str(index) + " module1 and module2 are NOT equal; the module is printed below")
        print(module1)
        print("\n")
    print("End of modules")

Also, does your model contain batch normalization or dropout layers? If so, double-check that model.eval() is actually in effect when you run the evaluation. It could also be an inconsistency in your validation data iterator, for example random augmentation still being applied to the validation set, which would explain why the accuracy changes between runs.
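
If you want to rule those two things out, here are two quick checks (a minimal sketch; model and val_dataset stand in for your own objects):

import torch.nn as nn
from torch.utils.data import DataLoader

# 1) Confirm no BatchNorm/Dropout layer is still in training mode after model.eval().
for name, m in model.named_modules():
    if isinstance(m, (nn.BatchNorm2d, nn.Dropout)) and m.training:
        print("still in training mode:", name)

# 2) Make the validation loader deterministic: no shuffling, no random augmentation.
val_loader = DataLoader(val_dataset, batch_size=64, shuffle=False, num_workers=0)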