How to keep inference results fixed in models using batchnorm

The answer from the linked post explains that the running statistics in batchnorm layers are updated during training and then used during evaluation (model.eval()).
If you want to keep these stats constant, call model.eval() before inference and don't perform any forward passes while the model is in training mode, since each forward pass in train() mode updates the running mean and variance.
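A minimal sketch of this behavior, using a standalone nn.BatchNorm1d layer (the layer and tensor shapes here are just illustrative): a forward pass in train() mode changes the running statistics, while forward passes in eval() mode leave them untouched, so repeated inference on the same input gives identical outputs.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

bn = nn.BatchNorm1d(3)
x = torch.randn(8, 3)

# Training mode: a forward pass updates running_mean/running_var.
bn.train()
before = bn.running_mean.clone()
bn(x)
print(torch.equal(bn.running_mean, before))  # False: stats were updated

# Eval mode: forward passes use the stored stats without modifying them,
# so the output is deterministic across calls.
bn.eval()
frozen = bn.running_mean.clone()
out1 = bn(x)
out2 = bn(x)
print(torch.equal(bn.running_mean, frozen))  # True: stats unchanged
print(torch.equal(out1, out2))               # True: identical results
```

Note that model.eval() only switches layer behavior (batchnorm, dropout); it does not disable gradient tracking, so it is usually combined with torch.no_grad() during inference.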
