Updating batch normalization momentum

Similar to a learning rate schedule, it seems a fair number of networks implemented in TensorFlow use a momentum schedule for batch normalization. Is it possible to do something similar in PyTorch, without losing the running mean/variance?


You can change the module's batch_norm_obj.momentum attribute, or use the functional form F.batch_norm: http://pytorch.org/docs/master/nn.html#torch.nn.functional.batch_norm
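A minimal sketch of the first approach: setting the momentum attribute in place updates how future running statistics are accumulated, but does not reset the running mean/variance already stored on the module.

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(16)
x = torch.randn(8, 16, 4, 4)
bn(x)  # forward pass in train mode updates running_mean/running_var
saved_mean = bn.running_mean.clone()

# Changing momentum in place: running statistics are preserved
bn.momentum = 0.01
assert torch.equal(bn.running_mean, saved_mean)
```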

How does one change the momentum attribute for batch norm included as a layer?

Not sure if I understand your question, but I'd do bn.momentum = 0.01

Sorry, let me clarify.

I have network components defined in blocks like

self.block1 = nn.Sequential(

How can I modify the momentum of those batch norm layers in the network?

Oh, I see. You can use Python indexing to get the layers inside an nn.Sequential container, so it would be self.block1[1].momentum = .... If you want to change the momentum for all BN modules in a network, you can iterate over net.modules() and test whether each module is a BN layer.
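Putting both suggestions together, a sketch (the block layout and momentum value are just examples):

```python
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.block1 = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1),
            nn.BatchNorm2d(16),  # index 1 inside the Sequential
            nn.ReLU(),
        )

net = Net()

# Option 1: index into the Sequential container directly
net.block1[1].momentum = 0.05

# Option 2: iterate over all modules and update every BN layer
for m in net.modules():
    if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
        m.momentum = 0.05
```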


That did the trick. Thanks!
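For the momentum *schedule* from the original question, one way to wire this up is a small helper called once per epoch. This is a sketch: the decay rule is hypothetical, and it matches against the private base class nn.modules.batchnorm._BatchNorm to catch all BN variants (checking isinstance against the public BatchNorm1d/2d/3d classes works too).

```python
import torch.nn as nn

def set_bn_momentum(model, momentum):
    """Set the momentum of every batch norm layer in the model."""
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.momentum = momentum

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))

# Hypothetical schedule: halve the momentum each epoch
for epoch in range(3):
    set_bn_momentum(model, 0.1 * (0.5 ** epoch))
    # ... training loop for this epoch ...
```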