How can I use batch normalization with a non-learnable bias while keeping the other parameters learnable?

I want to use batchnorm, but I want the bias to not be learnable. How can I specify that? The affine flag makes both gamma and beta learnable or frozen together, but I only want beta (the bias) to be non-learnable.
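For example (a minimal check), with affine=False both parameters simply disappear:

import torch
from torch import nn

# affine=False registers neither gamma (weight) nor beta (bias),
# so this flag cannot freeze just one of them.
m = nn.BatchNorm2d(4, affine=False)
print(m.weight, m.bias)  # None None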

Maybe this is something like what you want to do?

import torch
from torch import nn

m = nn.BatchNorm2d(4)
x = torch.randn(2, 4, 16, 16)

# Freeze only the bias (beta); the weight (gamma) stays learnable.
m.bias.requires_grad_(False)

print(m.weight.grad)  # None: no backward pass has run yet
print(m.bias.grad)    # None

m.zero_grad()
y = m(x).mean()
y.backward()

print(m.weight.grad)  # gradient for gamma
print(m.bias.grad)    # still None: beta receives no gradient

Output:

None
None
tensor([ 1.4578e-10, -2.0856e-09, -5.6265e-11, -1.4465e-09])
None
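
A follow-up note: if you build the optimizer after freezing, you can also pass only the still-trainable parameters, so the frozen bias is never touched by weight decay or momentum. A minimal sketch (the SGD settings here are arbitrary):

from torch import optim

# Collect only parameters that still require gradients; the frozen
# bias (requires_grad=False) is excluded from optimizer updates.
trainable = [p for p in m.parameters() if p.requires_grad]
optimizer = optim.SGD(trainable, lr=0.1)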