Differences between eval() and train() modes

Which modules besides BatchNorm and Dropout are affected by these modes? I was wondering in which cases the two modes are interchangeable.

The `train()` and `eval()` calls change the internal `self.training` flag, so you could grep for it in the source folder (`grep -r self.training`).
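You can see the flag flip directly; `train()` and `eval()` set it recursively on all submodules:

```python
import torch.nn as nn

# train()/eval() recursively set self.training on the module and its children
model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))

model.train()
print(model.training, model[1].training)   # True True

model.eval()
print(model.training, model[1].training)   # False False
```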
Currently it seems these modules are affected by it in the PyTorch core:

  • Quantization modules
  • Dropout
  • InstanceNorm
  • BatchNorm
  • RNN (probably only if using cudnn, as different implementations will be called)
  • RReLU

Besides that, any custom nn.Module might of course use this flag internally.
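A minimal sketch of how a custom module could use the flag (the `NoisyLinear` class and `noise_std` parameter are made-up names for illustration):

```python
import torch
import torch.nn as nn

class NoisyLinear(nn.Module):
    """Hypothetical module that adds Gaussian noise only during training."""

    def __init__(self, in_features, out_features, noise_std=0.1):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.noise_std = noise_std

    def forward(self, x):
        out = self.linear(x)
        if self.training:  # flipped by model.train() / model.eval()
            out = out + torch.randn_like(out) * self.noise_std
        return out

m = NoisyLinear(4, 2)
m.eval()  # forward is now deterministic
```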
