Could you add a 'fix_gamma' option to BatchNorm, like MXNet or TensorFlow have?

In the PyTorch docs, the BatchNorm module only exposes an 'affine' option, and it is all-or-nothing: setting 'affine=False' fixes both gamma=1 and beta=0, while 'affine=True' makes both learnable. In a real network setting this is not very useful when you only want to fix gamma.
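For reference, this is how the current 'affine' flag behaves (a minimal check, assuming a recent PyTorch build):

```python
import torch.nn as nn

# affine=True (default): both gamma (weight) and beta (bias) are learnable.
bn = nn.BatchNorm2d(16, affine=True)
print(bn.weight.shape, bn.bias.shape)   # torch.Size([16]) torch.Size([16])

# affine=False: no learnable parameters at all; gamma is fixed to 1 AND beta to 0.
bn_fixed = nn.BatchNorm2d(16, affine=False)
print(bn_fixed.weight, bn_fixed.bias)   # None None
```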

Besides, 'fix_gamma' is a common way to reduce the number of parameters and make the gradients more stable (https://www.tensorflow.org/api_docs/python/tf/layers/BatchNormalization).
To implement this in PyTorch I currently have to add a separate scale/shift layer like this (https://discuss.pytorch.org/t/is-scale-layer-available-in-pytorch/7954/6?u=kaleidozhouyn), which costs both memory and time, and is also inconvenient when converting the model to other frameworks (like Caffe and NCNN). A rough sketch of what I mean is below.
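This is roughly the kind of workaround module I mean (just a sketch; 'FixGammaBN' is my own illustrative name, not an existing PyTorch class):

```python
import torch
import torch.nn as nn

class FixGammaBN(nn.Module):
    # BatchNorm with gamma fixed to 1; only a per-channel shift (beta) is learned.
    def __init__(self, num_features):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)   # no gamma/beta inside
        self.beta = nn.Parameter(torch.zeros(num_features))    # learnable shift only

    def forward(self, x):
        # normalize, then add the learnable shift channel-wise
        return self.bn(x) + self.beta.view(1, -1, 1, 1)
```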

Could you add this 'fix_gamma' option in the latest PyTorch version? I think it would be really helpful.
Thanks.

OK, for now I'm trying to set 'requires_grad=False' on the BatchNorm weight (gamma) and hope this solves the problem.
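Something like this (a sketch of what I'm trying; since gamma is initialized to 1, freezing it should behave like fix_gamma):

```python
import torch.nn as nn

bn = nn.BatchNorm2d(16)              # affine=True by default, gamma initialized to 1
bn.weight.requires_grad_(False)      # freeze gamma; beta (bn.bias) stays learnable
```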