Confusing situation in model training

Although batch_norm is deactivated and both the learning rate and momentum are set to 0, the model's loss and accuracy still change. What can cause this?

Could you explain the use case you are working on and when exactly the accuracy and loss change?
Are you seeing different losses for the same input batch while the model is in evaluation mode via model.eval()?
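For reference, a minimal sketch of that kind of check, with a toy model and random data standing in for your actual setup:

```python
import torch
import torch.nn as nn

# In eval mode (dropout disabled, batch norm using running statistics),
# the loss for a fixed batch should be identical across repeated forward
# passes. The model and data below are placeholders for your own objects.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 5), nn.BatchNorm1d(5), nn.Linear(5, 2))
criterion = nn.CrossEntropyLoss()
inputs = torch.randn(8, 10)
targets = torch.randint(0, 2, (8,))

model.eval()
with torch.no_grad():
    loss_a = criterion(model(inputs), targets)
    loss_b = criterion(model(inputs), targets)
print(loss_a.item(), loss_b.item())  # identical if the model is deterministic
```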

The problem is now solved.
When I saved the model and loaded it back, the keys in the model's state_dict started with the "module." prefix.
So when I tried to replace my model's parameters with the saved ones, there was a key mismatch. But I only found out about it from the Jupyter notebook, not from the actual script; the script didn't show me any warnings :)
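In case anyone hits the same thing: the "module." prefix typically comes from wrapping the model in nn.DataParallel before saving. Here is a minimal sketch of the mismatch and one possible fix, using a toy nn.Linear model and a placeholder checkpoint path as stand-ins for the real ones:

```python
import torch
import torch.nn as nn

# nn.DataParallel registers the wrapped model as `self.module`, so every
# state_dict key gets a "module." prefix. A checkpoint saved from the
# wrapped model therefore won't load into the unwrapped one as-is.
model = nn.Linear(10, 2)
wrapped = nn.DataParallel(model)
torch.save(wrapped.state_dict(), "checkpoint.pth")  # keys: "module.weight", ...

state_dict = torch.load("checkpoint.pth", map_location="cpu")
# Strip the "module." prefix from each key before loading.
state_dict = {k[len("module."):] if k.startswith("module.") else k: v
              for k, v in state_dict.items()}
nn.Linear(10, 2).load_state_dict(state_dict)  # succeeds after stripping
```

Note that `load_state_dict` raises a RuntimeError on mismatched keys when `strict=True` (the default), while with `strict=False` it silently skips them and only returns the lists of missing and unexpected keys; that may explain why the script showed no warnings. Alternatively, wrapping the model in nn.DataParallel again before loading avoids the mismatch entirely.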