Applying Monte Carlo Dropout in CNN as Bayesian approximation

I am trying to implement a Bayesian CNN using MC Dropout in PyTorch. As I understand it, we apply dropout during both training and test time, and we should multiply the dropout output by 1/(1-p), where p is the dropout rate. Could anyone confirm these statements, or give me an example of what the code should look like, please?

The dropout scaling is already done internally if you are using the nn.Dropout module:

import torch
import torch.nn as nn

drop = nn.Dropout()  # default p=0.5
x = torch.ones(10)
out = drop(x)  # surviving elements are already scaled by 1/(1-p) = 2
> tensor([0., 0., 2., 2., 0., 0., 0., 2., 0., 0.])

so you don’t need to reapply it.
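
For the test-time part of the question: the usual trick is to keep only the dropout layers stochastic during inference and then average several forward passes. A minimal sketch (the `SmallCNN` architecture and the `enable_mc_dropout` helper are illustrative assumptions, not from the original post) could look like this:

```python
import torch
import torch.nn as nn

# Hypothetical small CNN with dropout, just for illustration.
class SmallCNN(nn.Module):
    def __init__(self, p=0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1),
            nn.ReLU(),
            nn.Dropout2d(p),          # spatial dropout on conv feature maps
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(8, 10)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

def enable_mc_dropout(model):
    # Put only the dropout layers back into training mode so they stay
    # stochastic at test time, while layers such as BatchNorm stay in eval mode.
    for m in model.modules():
        if isinstance(m, (nn.Dropout, nn.Dropout2d)):
            m.train()

model = SmallCNN()
model.eval()
enable_mc_dropout(model)

x = torch.randn(4, 1, 28, 28)
with torch.no_grad():
    # T stochastic forward passes: the mean is the predictive estimate,
    # the standard deviation is a simple uncertainty measure.
    samples = torch.stack([model(x).softmax(dim=-1) for _ in range(20)])
mean_pred = samples.mean(dim=0)     # shape: (4, 10)
uncertainty = samples.std(dim=0)    # shape: (4, 10)
```

Because nn.Dropout uses inverted dropout (the 1/(1-p) scaling shown above), each of these stochastic passes is already correctly scaled; no extra multiplication is needed.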
