BatchNorm - evaluation while training

Assume I want to train a neural network that inherits from nn.Module and uses BatchNorm in some of its hidden layers. I want to evaluate the network in between optimization steps. How would I best do this?

My current understanding is that I should do the following after the backward pass (see the sketch after this list):

  1. Manually set .training = False on each layer
  2. Pass the input as volatile and evaluate the output
  3. Manually set .training = True on all layers
  4. Continue training
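
In code, I imagine something like the following minimal sketch (the model is just a hypothetical example, and I'm using the torch.no_grad() context as the modern replacement for the volatile flag):

```python
import torch
import torch.nn as nn

# Hypothetical example model containing a BatchNorm layer.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.BatchNorm1d(32),
    nn.ReLU(),
    nn.Linear(32, 10),
)

def evaluate_manually(model, inputs):
    # Step 1: manually flip the training flag on every submodule.
    for module in model.modules():
        module.training = False
    # Step 2: run the forward pass without autograd bookkeeping
    # (torch.no_grad() replaces the old volatile flag).
    with torch.no_grad():
        outputs = model(inputs)
    # Step 3: restore training mode on every submodule.
    for module in model.modules():
        module.training = True
    return outputs

outputs = evaluate_manually(model, torch.randn(8, 16))
```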

Is there a better way to do this, especially without manually setting .training = False on every layer?

You should call model.eval() before evaluating and model.train() before resuming training.
http://pytorch.org/docs/master/nn.html#torch.nn.Module.eval

You won’t need to set the training attribute manually on every layer; eval() and train() set it recursively on the module and all of its submodules.
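
For example, a minimal sketch of one optimization step with an evaluation in between. The names model, optimizer, loss_fn, inputs, targets, and val_inputs are assumed to already exist, and torch.no_grad() stands in for the old volatile flag:

```python
import torch

# One training step (assumed model, optimizer, loss_fn, inputs, targets).
model.train()                 # training mode: BatchNorm uses batch statistics
optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()
optimizer.step()

# Evaluate in between steps.
model.eval()                  # eval mode: BatchNorm uses running statistics
with torch.no_grad():         # skip autograd bookkeeping during evaluation
    val_outputs = model(val_inputs)

model.train()                 # switch back before the next optimization step
```

Internally, eval() is just train(False), which walks the module tree and flips the training flag on every child, so it covers all BatchNorm (and Dropout) layers in one call.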