Annealing Dropout

Assuming I have a self.dropout1 = nn.Dropout() layer defined in __init__, is it OK to anneal the dropout rate in the training loop by modifying it directly, e.g. self.model.dropout1.p = 0.9, 0.8, … 0.0?
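Something like this rough sketch, where the 0.9 starting rate, the linear schedule, and the layer sizes are just placeholders:

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 10)
        self.dropout1 = nn.Dropout(p=0.9)
        self.fc2 = nn.Linear(10, 2)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = self.dropout1(x)
        return self.fc2(x)

model = Net()
num_epochs = 10
for epoch in range(num_epochs):
    # anneal the dropout probability linearly from 0.9 down to 0.0
    model.dropout1.p = 0.9 * (1 - epoch / (num_epochs - 1))
    # ... regular forward/backward/optimizer step for this epoch ...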

Or is there a better way?

It should work. However, I would say a cleaner way is to use torch.nn.functional.dropout (http://pytorch.org/docs/nn.html?highlight=dropout#torch.nn.functional.dropout), like:

x = F.dropout(x, p, training=self.training)
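For example, a rough sketch along these lines (the drop_p attribute name and layer sizes are just illustrative):

import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 10)
        self.fc2 = nn.Linear(10, 2)
        self.drop_p = 0.9  # plain float attribute, anneal it from the training loop

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        # functional dropout; self.training is toggled by model.train() / model.eval()
        x = F.dropout(x, p=self.drop_p, training=self.training)
        return self.fc2(x)

Then in the training loop you just set model.drop_p to the new value each epoch.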

If I use the functional form, that dropout is not included when I print out my model with print(self.model). Is there a way to circumvent that? I find it quite useful to print out a model as a final check…

I see. I guess if you want to print it out, your proposed method is the best solution.

Good to know. Many thanks.