Does PyTorch support fused/folded BatchNormalization?

Hello everyone, hope you are having a great day.
I’m curious to know whether PyTorch (as of the latest version) has support for fused BatchNormalization.
Basically, fused BatchNormalization is the folding of a BatchNorm layer into the preceding convolutional layer: since the BatchNorm parameters are fixed after training, they can be absorbed into the convolution’s weights and bias as constants (a sketch of the arithmetic follows the quote below).
According to TensorFlow’s documentation, it provides a 12% to 30% performance boost at inference time, which is a considerable gain. Link

> Fused batch norm combines the multiple operations needed to do batch normalization into a single kernel. Batch norm is an expensive process that for some models makes up a large percentage of the operation time. Using fused batch norm can result in a 12%-30% speedup.
>
> There are two commonly used batch norms and both support fusing. The core tf.layers.batch_normalization added fused starting in TensorFlow 1.3.
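
For reference, the folding arithmetic is simple. With BatchNorm parameters gamma, beta, running mean, running variance, and eps, the fused convolution uses W' = W * gamma / sqrt(var + eps) and b' = (b - mean) * gamma / sqrt(var + eps) + beta. Here is a minimal sketch of that folding for a Conv2d followed by a BatchNorm2d; the helper name fold_bn_into_conv is my own for illustration, not a PyTorch API, and it assumes the BatchNorm has affine parameters and tracked running stats (the defaults):

```python
import torch
import torch.nn as nn

@torch.no_grad()
def fold_bn_into_conv(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    # Hypothetical helper (not a PyTorch API): folds an eval-mode
    # BatchNorm2d into the preceding Conv2d and returns a new Conv2d.
    # Assumes bn.affine=True and bn.track_running_stats=True.
    fused = nn.Conv2d(
        conv.in_channels, conv.out_channels, conv.kernel_size,
        stride=conv.stride, padding=conv.padding,
        dilation=conv.dilation, groups=conv.groups, bias=True,
    )
    # Per-output-channel scale: gamma / sqrt(running_var + eps).
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    # W' = W * scale (broadcast over input channels and kernel dims).
    fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
    # b' = (b - running_mean) * scale + beta.
    b = conv.bias if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.copy_((b - bn.running_mean) * scale + bn.bias)
    return fused
```

In eval mode, fused(x) should match bn(conv(x)) up to floating-point error.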

I know there was a PR back in 2017 that was rejected, but I don’t know whether this has been implemented since.
Any update in this regard is greatly appreciated.

Update:
Here is a PyTorch implementation from Intel’s NervanaSystems: Folded_batch_normalization


PyTorch currently has torch.nn.utils.fuse_conv_bn_eval (and, I think, some ConvBnReLU fusion with dedicated kernels, but only in the quantized setting), which you must call manually. For details, look at the source code; it’s really simple. It isn’t documented yet, though.
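
For what it’s worth, here is a short usage sketch of that utility, importing it from its defining module torch.nn.utils.fusion:

```python
import torch
from torch.nn.utils.fusion import fuse_conv_bn_eval

# Both modules must be in eval mode for the fusion to be valid.
conv = torch.nn.Conv2d(3, 16, kernel_size=3, padding=1).eval()
bn = torch.nn.BatchNorm2d(16).eval()

# Returns a new Conv2d with the BatchNorm folded into its weight and bias.
fused = fuse_conv_bn_eval(conv, bn)

x = torch.randn(1, 3, 32, 32)
# The fused conv should match conv -> bn up to floating-point error.
print(torch.allclose(fused(x), bn(conv(x)), atol=1e-6))  # True
```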
