Assume I want to train a neural network inheriting from nn.Module that uses BatchNorm in some of the hidden layers. I want to evaluate the network between optimization steps. How would I best do this?
My current understanding is that, after the backward pass, I would do the following:
- Manually set `.training = False` on each layer
- Pass the input as `volatile` and evaluate the output
- Manually set `.training = True` on all layers
- Continue training
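For concreteness, here is a minimal sketch of the steps above as I picture them. The model, optimizer, and data are made-up placeholders; on recent PyTorch versions `torch.no_grad()` takes the place that `volatile` inputs had in older ones:

```python
import torch
import torch.nn as nn

# hypothetical model with a BatchNorm hidden layer
model = nn.Sequential(
    nn.Linear(8, 16),
    nn.BatchNorm1d(16),  # behaves differently in train vs eval mode
    nn.ReLU(),
    nn.Linear(16, 1),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

x = torch.randn(32, 8)
y = torch.randn(32, 1)

# one optimization step
loss = loss_fn(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# evaluate in between steps: flip every submodule to eval mode by hand
for m in model.modules():
    m.training = False
with torch.no_grad():  # no autograd graph is built, like volatile did
    val_loss = loss_fn(model(x), y).item()

# switch back and continue training
for m in model.modules():
    m.training = True
```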
Is there a better way to do this? In particular, is there a way that avoids manually setting `.training = False` on every layer?