Can I use batchnorm in torch script and evaluate model during training?

From the documentation, operations that have different behaviors in training and eval modes will always behave as if they are in the mode they were in during tracing. Batchnorm behaves differently in training and eval. Does this mean that if I use batchnorm in a ScriptModule, I cannot evaluate it on the test set at any point during training?

You can.

If training is not stable, test/eval mode does not necessarily give better accuracy than training mode.
In such cases, you can call model.train() before testing/eval so that the batchnorm running mean is adjusted, and test/eval accuracy is not hampered by batchnorm.

Just to ensure, @smth have I understood it correctly?

@smth Hi, could you please tell us your opinion on this question? Thanks!

In a ScriptModule, .train() and .eval() are correctly supported in 1.0.
There is no difference from the behavior of a regular nn.Module.
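To make that concrete, here is a minimal sketch (the `Net` module and shapes are my own illustration, not from the thread): a module containing batchnorm is compiled with `torch.jit.script`, and `.train()`/`.eval()` toggle its mode just like on a regular `nn.Module`.

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.bn = nn.BatchNorm1d(4)

    def forward(self, x):
        return self.bn(x)

# Scripting (not tracing) preserves control flow and the training flag.
scripted = torch.jit.script(Net())

scripted.train()           # batchnorm uses batch statistics and updates running stats
x = torch.randn(8, 4)
_ = scripted(x)

scripted.eval()            # batchnorm now uses the stored running statistics
_ = scripted(x)
```

So you can alternate between training steps and evaluation on the test set within the same training loop, exactly as you would with an eager-mode module.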

See these unit tests for example: https://github.com/pytorch/pytorch/blob/e9db9595d23683443456b8f17e4d28c4519beaa4/test/test_jit.py#L635-L672

Thanks for your reply. So is the doc inaccurate? :slight_smile:

The doc is correct. That note is about tracing mode, not scripting mode.
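A short sketch of the caveat the note refers to (my own illustration, assuming a plain `nn.BatchNorm1d`): with `torch.jit.trace`, the module's mode at trace time is baked into the recorded graph, so a module traced in eval mode keeps eval-mode batchnorm behavior.

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(4)
bn.eval()                       # trace the module in eval mode
x = torch.randn(8, 4)
traced = torch.jit.trace(bn, x)

# The trace recorded eval-mode batchnorm: it normalizes with the stored
# running statistics, matching the eager module in eval mode. Per the doc
# note, this trace-time behavior is what the traced graph keeps.
assert torch.allclose(traced(x), bn(x))
```

With `torch.jit.script`, by contrast, the `self.training` check is compiled rather than recorded, which is why `.train()` and `.eval()` keep working as usual.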