Assuming that `self.dropout` is `torch.nn.Dropout`, your `dropout_out` variable will be equal to `x.float()` in evaluation mode, since dropout acts as an identity function during evaluation.
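Here is a minimal check of that behavior (the tensor and layer names are made up for illustration, not taken from your code):

```python
import torch
import torch.nn as nn

dropout = nn.Dropout(p=0.5)
x = torch.randn(4, 8)

dropout.train()
# In training mode, elements are randomly zeroed and the rest are rescaled,
# so the output will (almost surely) differ from the input.
print(torch.equal(dropout(x.float()), x.float()))  # usually False

dropout.eval()
# In evaluation mode, dropout is a no-op and simply returns its input.
print(torch.equal(dropout(x.float()), x.float()))  # True
```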
If you want to apply dropout during evaluation (e.g. to implement Monte Carlo Dropout or similar), you can use the `torch.nn.functional.dropout` function instead of the module. The functional form has a boolean `training` parameter that lets you explicitly control whether dropout is applied, independent of the model's train/eval mode.
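A rough sketch of how this could look for Monte Carlo Dropout (the network architecture, dropout probability, and number of samples below are assumptions for illustration only):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MCDropoutNet(nn.Module):  # hypothetical example model
    def __init__(self, p=0.5):
        super().__init__()
        self.fc1 = nn.Linear(8, 16)
        self.fc2 = nn.Linear(16, 2)
        self.p = p

    def forward(self, x):
        x = torch.relu(self.fc1(x.float()))
        # training=True keeps dropout active even when the module is in eval mode
        x = F.dropout(x, p=self.p, training=True)
        return self.fc2(x)

model = MCDropoutNet()
model.eval()  # other layers keep their evaluation behavior

x = torch.randn(4, 8)
with torch.no_grad():
    # Multiple stochastic forward passes yield a predictive distribution
    samples = torch.stack([model(x) for _ in range(10)])
mean, std = samples.mean(dim=0), samples.std(dim=0)
```

The standard deviation over the repeated forward passes then gives you a rough uncertainty estimate per prediction.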