Performance drops dramatically when switching to model.eval()

I am working on an unsupervised domain adaptation (UDA) project, and I have just figured out what was hurting performance when using BatchNorm layers.

Here is a link to a solution that is helpful for UDA projects using BatchNorm:
Possible Issue with batch norm train/eval modes

In short, this answer helped me a lot:
https://discuss.pytorch.org/t/possible-issue-with-batch-norm-train-eval-modes/31632/2?u=dongdongtong
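The core of the issue described in that thread: in eval mode, BatchNorm normalizes with running statistics accumulated on the source/training data, while in train mode it uses the current batch's own statistics. Under a domain shift, the source running statistics no longer match the target data, so the normalized outputs are badly off. Below is a minimal pure-Python sketch of that effect (the numbers and the shifted "target" distribution are made up for illustration; real BatchNorm also applies learned affine parameters, omitted here):

```python
import statistics

def normalize(batch, mean, var, eps=1e-5):
    # Simplified BatchNorm normalization step (no learned scale/shift).
    return [(x - mean) / (var + eps) ** 0.5 for x in batch]

# "Source" data that the running statistics were estimated on during training.
source = [0.0, 1.0, 2.0, 1.0]
running_mean = statistics.fmean(source)      # 1.0
running_var = statistics.pvariance(source)   # 0.5

# A "target" batch from a shifted domain (mean around 10 instead of 1).
target = [9.0, 10.0, 11.0, 10.0]

# eval() behavior: normalize the target batch with the SOURCE running stats.
eval_out = normalize(target, running_mean, running_var)

# train() behavior: normalize with the batch's own statistics.
train_out = normalize(target, statistics.fmean(target), statistics.pvariance(target))

print(statistics.fmean(eval_out))   # far from 0: outputs are badly shifted
print(statistics.fmean(train_out))  # ~0: outputs stay well normalized
```

This is why the workaround in the linked answer (keeping the BatchNorm layers in train mode, i.e. using target-batch statistics, while the rest of the model is in eval mode) recovers performance on the target domain.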

But it feels strange to do this, because when we deploy a model pretrained on the training set, we don't want to alter the pretrained model's behavior (here, its BatchNorm running statistics), so I am still exploring other solutions.