I understand that in PyTorch, dropout is automatically disabled when not in training mode. However, I observed that changing the dropout value during inference affects the results. Is the condition for disabling dropout simply not being in training mode, or is it specifically that eval() has been called?
“Not in training mode” and calling .eval() are the same thing: internally, .eval() sets the self.training attribute of the module (and all registered submodules) to False.
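A minimal sketch of this flag behavior (a toy nn.Sequential, not the RCF model from this thread):

import torch.nn as nn

# .eval() flips the `training` flag on the module and every registered submodule.
model = nn.Sequential(nn.Linear(16, 16), nn.Dropout(p=0.5))
model.eval()
print(model.training)                                # False
print(all(not m.training for m in model.modules()))  # True: submodules as well
model.train()
print(model.training)                                # True: back in training mode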
Do you have a minimal code snippet showing this behavior?
Sorry for the late reply…
import torch
from model_zoo.RCF import RCF

model = RCF(dropout_prob=0)
...
model.load_state_dict(torch.load(model_path))
model.to(device)
...
imgs = imgs.to(device)  # Variable is deprecated; plain tensors work directly
outputs = model(imgs)
This is all of the code that uses the model, and setting a different dropout probability changes the results. I didn't set any mode (train or eval) in this case. What causes the results to vary?
If you don't explicitly call model.eval(), the module stays in its default training mode, so dropout is still active and it's expected to see different outputs.
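A short sketch of the described behavior, again with a toy module standing in for RCF: in the default training mode dropout randomly zeroes activations, so repeated forward passes differ; after .eval() the outputs are deterministic.

import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 16), nn.Dropout(p=0.5))
x = torch.randn(1, 16)

# Default mode is train(), so dropout is active and repeated calls almost always differ.
print(torch.equal(model(x), model(x)))  # expected: False

# After .eval(), dropout is a no-op and outputs are deterministic.
model.eval()
print(torch.equal(model(x), model(x)))  # True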