Why does the result change after model.eval()?

I used DenseNet to classify images into some categories.

As you know, model.train() is called while training a model, and model.eval() is called while evaluating it.

Before testing with the evaluation data, I evaluated the model on the training data.
In train() mode the accuracy was 96.25%, but after switching to eval() mode it dropped to 83.02%
(with the same model and the same training data, of course).

Has anyone faced the same problem?
What is the cause of the problem above?

Switching to evaluation mode changes the behavior of some layers:
for example, batch norm uses its collected running statistics instead of the current batch statistics, and dropout is disabled.
All of the above may affect the performance of your model on the data, especially since you used the training data for evaluation.
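A minimal sketch of this effect, using a small illustrative block (not the DenseNet from the thread): in train() mode, BatchNorm normalizes with the current batch statistics and Dropout randomly zeroes activations; in eval() mode, BatchNorm uses its running statistics and Dropout becomes a no-op, so the same input produces different outputs.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Illustrative block: BatchNorm + Dropout, the two layers whose behavior
# depends on the train/eval flag.
block = nn.Sequential(nn.BatchNorm1d(4), nn.Dropout(p=0.5))
x = torch.randn(8, 4)

block.train()
out_train = block(x)  # batch statistics + random dropout masking

block.eval()
out_eval = block(x)   # running statistics, dropout disabled

# The two outputs differ element-wise even though weights are identical.
print(torch.allclose(out_train, out_eval))
```

This is why a model that was only ever run in train() mode can score noticeably differently once eval() is switched on, without any parameter actually changing.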


Thanks for your comment.
I understand that the model's behavior is fixed by model.eval().
However, I'm not sure why it causes such a big difference.
Whenever I use DenseNet, should I set model.train() even for evaluation?

First, I suggest evaluating the model on the test set.
You can also try whether there is a difference when you evaluate with torch.no_grad() instead of switching to eval mode.
However, there is no reason to perform inference in training mode.
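The usual evaluation pattern combines both, since they do different jobs. A sketch with a stand-in model (the nn.Linear here is illustrative, not the thread's DenseNet):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 3)  # stand-in for the actual network
model.eval()              # fixes BatchNorm/Dropout behavior

with torch.no_grad():     # disables autograd tracking: less memory, faster
    x = torch.randn(5, 10)
    logits = model(x)

print(logits.requires_grad)  # False: no computation graph was built
```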

I will try torch.no_grad() according to your advice.
After the trial, I will report the result.

Note that torch.no_grad() just turns off gradient tracking, while .eval() fixes the layers' behavior (batch norm statistics, dropout randomness, etc.).
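The two switches are orthogonal, which a quick check makes concrete: entering a no_grad() block does not change a module's training flag, and calling .eval() does not stop gradient tracking.

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)  # modules default to training mode

with torch.no_grad():
    # no_grad() only affects autograd; the module is still in train mode
    print(drop.training)  # True

drop.eval()
x = torch.ones(4, requires_grad=True)
y = drop(x)
# eval() only affects layer behavior; gradients are still tracked
print(y.requires_grad)    # True
```

So a result that changes between modes comes from .eval() (layer behavior), not from torch.no_grad().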