Using dropout in evaluation mode

Is there a simple way to use dropout during evaluation mode?

I have a plan to use a pre-trained network where I re-apply the training dropout, and find outliers by measuring the variance in predictions, all done in evaluation mode so that no gradients are backpropagated.

Many thanks in advance!

Assuming that you are using the dropout modules, you can put the whole model in eval mode and then switch just the dropout layers back to train mode:

model.eval()
for m in model.modules():
  if m.__class__.__name__.startswith('Dropout'):
    m.train()
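A minimal, self-contained sketch of that idea (the toy model and layer sizes here are illustrative, not from the thread):

```python
import torch
import torch.nn as nn

# Toy model containing both dropout and batch norm (illustrative sizes).
model = nn.Sequential(
    nn.Linear(10, 32),
    nn.BatchNorm1d(32),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(32, 2),
)

# Put everything in eval mode, then switch only the dropout layers back on.
model.eval()
for m in model.modules():
    if m.__class__.__name__.startswith('Dropout'):
        m.train()

# Batch norm stays in eval mode while dropout is active again.
assert not model[1].training  # BatchNorm1d still in eval mode
assert model[3].training      # Dropout back in train mode
```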

Thanks for the fast reply! So I assume this makes an exception for ‘Dropout’ and sets it active using train() but leaves batchnorm etc. switched off in eval mode?

yep, that is the idea!


One quick question: will this method zero different nodes every time you present a new image to the network like it would in training?

if you don’t reset the seed, yes

That’s great. I was just checking because I was unsure whether it would pick one particular dropout selection and stick to it until the model was reset with a line like “m.eval()” before presenting a new image.
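You can verify that the mask is re-sampled on every forward pass with a quick toy check (not from the thread):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

drop = nn.Dropout(p=0.5)
drop.train()  # dropout active

x = torch.ones(1, 1000)
out1 = drop(x)  # one random mask
out2 = drop(x)  # a freshly sampled mask on the next call

# Different nodes are zeroed on each call, so the outputs differ
# (with 1000 elements at p=0.5, identical masks are vanishingly unlikely).
assert not torch.equal(out1, out2)
```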

Hi Simon, thanks for suggesting this approach! I tried it, and it gives predictions with a certain randomness for in-class inputs, but the randomness is almost non-existent for out-of-class inputs.

However, if I put the model in training mode without backpropagation (so that the weights are not updated) using model.train(), I observe significantly more randomness in the predictions for both in-class and out-of-class inputs, but the accuracy for the in-class inputs becomes very low (dropped from 95% to 63%).

To me, the two approaches should give the same results. Any thoughts on the different behavior? My model also has batch norm. Thanks.
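For reference, the variance-based sampling the thread discusses can be sketched as below; the model, `mc_predict` helper, and sample counts are illustrative. Note that full model.train() also switches batch norm to per-batch statistics (and updates its running stats), whereas this variant keeps batch norm frozen in eval mode:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy model with dropout and batch norm (illustrative).
model = nn.Sequential(
    nn.Linear(10, 32), nn.BatchNorm1d(32), nn.ReLU(),
    nn.Dropout(p=0.5), nn.Linear(32, 2),
)

def mc_predict(model, x, n_samples=20):
    # Eval mode overall, dropout re-enabled: batch norm uses running stats.
    model.eval()
    for m in model.modules():
        if m.__class__.__name__.startswith('Dropout'):
            m.train()
    # No gradients are needed, only repeated stochastic forward passes.
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    # Mean prediction and per-sample, per-class variance across passes.
    return preds.mean(dim=0), preds.var(dim=0)

x = torch.randn(8, 10)
mean, var = mc_predict(model, x)
assert mean.shape == (8, 2) and var.shape == (8, 2)
```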
