Different behavior in stochastic forward pass

How can I have different behavior for stochastic layers in the forward pass? For example, with both batch normalization and dropout present, how can I prevent BN from updating its running mean and variance while dropout still behaves as if in the training phase? In other words, how do I run BN in eval mode and dropout in train mode within the same forward pass?
Thanks a lot

Don’t use BN then, simple.

You could also do:

net.eval()                            # put the whole network in eval mode
for m in net.modules():
    if 'Dropout' in str(type(m)):     # matches Dropout, Dropout2d, Dropout3d, ...
        m.train()                     # switch only the dropout layers back to train mode

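As a minimal sketch (with a hypothetical toy model), this shows the pattern end to end: after `net.eval()` plus the dropout loop, the BN layer stays in eval mode and leaves its running statistics untouched during a forward pass, while dropout is back in train mode.

```python
import torch
import torch.nn as nn

# Hypothetical toy network containing both BN and dropout
net = nn.Sequential(nn.Linear(4, 4), nn.BatchNorm1d(4), nn.Dropout(p=0.5))

net.eval()                           # every submodule -> eval mode
for m in net.modules():
    if isinstance(m, nn.Dropout):    # isinstance is an alternative to the string check
        m.train()                    # re-enable stochastic behavior for dropout only

bn, drop = net[1], net[2]
print(bn.training, drop.training)    # False True

# In eval mode, BN uses (and does not update) its running statistics
before = bn.running_mean.clone()
_ = net(torch.randn(8, 4))
print(torch.equal(before, bn.running_mean))  # True: stats unchanged
```

`isinstance(m, nn.Dropout)` catches only `nn.Dropout` itself, whereas the string check above also matches `Dropout2d`/`Dropout3d`; pick whichever scope you need.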
Exactly what I was looking for, thanks a lot.