PyTorch BatchNorm to use unbiased variance

I need the BatchNorm layers in PyTorch to use variance estimates computed without Bessel's correction (the 'n-1' term in the denominator of the variance calculation).

Unbiased (Bessel-corrected) estimates give different results when you scale up the number of elements, and I do not want that; see my other post if you want more information on why I need this.
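(To illustrate the point about scaling the number of elements: duplicating every element leaves the biased variance, which divides by n, unchanged, while the Bessel-corrected estimate, which divides by n-1, shrinks. A minimal check with `torch.var`:)

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0])
x2 = x.repeat(2)  # same values, twice the number of elements

# Biased variance (divide by n) is invariant under duplication...
print(torch.var(x, unbiased=False).item())   # ~0.6667
print(torch.var(x2, unbiased=False).item())  # ~0.6667

# ...but the Bessel-corrected estimate (divide by n-1) changes:
print(torch.var(x, unbiased=True).item())    # 1.0
print(torch.var(x2, unbiased=True).item())   # 0.8
```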

Any idea how this can be done, or will I have to write a custom batchnorm for my case?

Hi,
I recommend writing a custom batchnorm, as you suggested, because PyTorch's BatchNorm functions do not expose an option to control the correction term used for the running statistics.

As for implementing your custom batchnorm, you can either write it using PyTorch functions or build a custom C++/CUDA extension (see the extension reference).
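A minimal sketch of the pure-PyTorch route, for 2D `(N, C)` inputs: normalize with the batch statistics in training mode and keep the running variance *without* Bessel's correction by passing `unbiased=False` to `torch.Tensor.var`. The class name and hyperparameter defaults here are my own choices, mirroring `nn.BatchNorm1d`:

```python
import torch
import torch.nn as nn

class BatchNorm1dNoBessel(nn.Module):
    """BatchNorm over (N, C) inputs whose running variance is tracked
    without Bessel's correction (divide by n, not n-1)."""

    def __init__(self, num_features, eps=1e-5, momentum=0.1):
        super().__init__()
        self.eps = eps
        self.momentum = momentum
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))
        self.register_buffer("running_mean", torch.zeros(num_features))
        self.register_buffer("running_var", torch.ones(num_features))

    def forward(self, x):
        if self.training:
            mean = x.mean(dim=0)
            var = x.var(dim=0, unbiased=False)  # no Bessel correction
            with torch.no_grad():
                # Exponential moving average of the biased batch statistics
                self.running_mean.mul_(1 - self.momentum).add_(self.momentum * mean)
                self.running_var.mul_(1 - self.momentum).add_(self.momentum * var)
        else:
            mean, var = self.running_mean, self.running_var
        return self.weight * (x - mean) / torch.sqrt(var + self.eps) + self.bias
```

Note that the built-in layers normalize with the biased batch variance too; the difference here is only in how `running_var` is updated. For 4D `(N, C, H, W)` inputs you would reduce over dims `(0, 2, 3)` instead.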

FYI, PyTorch has many functions and classes related to BatchNorm across its backends, as you can see in THNN, THCUNN, ATen native, and the Python frontend, and the Bessel correction term appears in each of them.