Correct Model setting for validation with batchnorm and dropout

Hi,
I am new to PyTorch.
I was searching the forum for solutions, and somebody mentioned that I have to call model.eval() for both the validation and testing phases when my model definition contains batchnorm and dropout layers. Can somebody clarify in a little detail how and why to do this?

Here is roughly how my code looks.
Should I call model.eval() before computing outputsD via the forward pass in the validation phase?
Note that I am not reloading the model; I am just doing a forward pass after training the model over all batches of data.

#code

model = Built_CNN(input_size, classes)

for epoch in range(max_iter):

    # Training loop over dataset:
    inputs = Variable(data)
    optimizer.zero_grad()
    outputs = model(inputs)
    # ... loss computation, loss.backward(), optimizer.step() ...

    # Validation loop over dataset:
    inputsD = Variable(dataD, volatile=True)
    outputsD = model(inputsD)

In general I would suggest using .train() and .eval() as good coding style, so that you won’t run into future errors, even if your current model doesn’t have any layers depending on this flag.

Usually you call model.train() before you iterate over your training dataset, i.e. before the first forward pass, and model.eval() likewise before the validation loop.

Have a look at the train() and test() functions in the MNIST example.
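To make the pattern concrete, here is a minimal sketch of toggling the two modes around a small model containing batchnorm and dropout. The model architecture and tensor shapes are made up for illustration; note that torch.no_grad() is the modern replacement for the volatile=True flag used in your snippet.

```python
import torch
import torch.nn as nn

# Hypothetical model for illustration; any nn.Module with
# dropout/batchnorm layers behaves the same way.
model = nn.Sequential(
    nn.Linear(10, 20),
    nn.BatchNorm1d(20),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(20, 2),
)

model.train()  # enables dropout; batchnorm uses per-batch statistics
# ... training forward/backward passes go here ...

model.eval()   # disables dropout; batchnorm uses its running statistics
with torch.no_grad():  # replaces the deprecated volatile=True flag
    x = torch.randn(4, 10)
    out = model(x)  # deterministic forward pass, shape (4, 2)
```

In eval mode the forward pass is deterministic, since dropout is bypassed and batchnorm no longer updates or depends on the current batch; that is exactly why you want it for validation and testing.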

@ptrblck thanks for the quick response. I will test this now.