Static quantization and batch norm error (could not run 'aten::native_batch_norm' with arguments from the 'QuantizedCPUTensorId' backend)

Hi all,

I'm working on statically quantizing a few models and am hitting the error below on a basic ResNet18. Any insight into what is missing to complete the quantization?
I did not fuse the batch norm layers, but it's unclear whether that is the core issue.
Any assistance would be appreciated; it's hard to find much documentation.

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in batch_norm(input, running_mean, running_var, weight, bias, training, momentum, eps)
   1921     return torch.batch_norm(
   1922         input, weight, bias, running_mean, running_var,
-> 1923         training, momentum, eps, torch.backends.cudnn.enabled
   1924     )
   1925 
RuntimeError: Could not run 'aten::native_batch_norm' with arguments from the 'QuantizedCPUTensorId' backend. 'aten::native_batch_norm' is only available for these backends: [CPUTensorId, CUDATensorId, MkldnnCPUTensorId, VariableTensorId].


Thanks

I think you need to fuse BN since there’s no quantized BN layer.
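To illustrate, here is a minimal sketch of fusing Conv-BN-ReLU with `torch.quantization.fuse_modules`. The `ConvBNReLU` module and the names `conv`/`bn`/`relu` are illustrative stand-ins for a ResNet basic block, not the actual ResNet18 definition:

```python
import torch
import torch.nn as nn
from torch.quantization import fuse_modules

# Toy Conv-BN-ReLU block standing in for one piece of a ResNet basic block.
class ConvBNReLU(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.bn = nn.BatchNorm2d(8)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))

model = ConvBNReLU().eval()  # fusion for inference requires eval mode

# Fold the BN (and ReLU) into the conv so the converted graph contains no
# standalone BatchNorm op. Returns a fused copy by default (inplace=False).
fused = fuse_modules(model, [["conv", "bn", "relu"]])

x = torch.randn(1, 3, 16, 16)
# The fused model is numerically equivalent to the original.
assert torch.allclose(model(x), fused(x), atol=1e-5)
print(type(fused.conv).__name__)  # ConvReLU2d; fused.bn/relu become Identity
```

After fusion the `bn` attribute is replaced by `nn.Identity`, so `convert()` never produces a quantized tensor feeding a floating-point BatchNorm.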


Thanks very much, will try fusing! The docs implied it was more of an accuracy boost than a requirement, but it makes sense that a BN layer won't otherwise be quantized, so to speak.
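For completeness, a sketch of the full eager-mode static quantization flow on a toy module (the `Net` module and its names are illustrative, not ResNet18). Skipping the `fuse_modules` step leaves a floating-point `BatchNorm2d` in the converted model, which is what triggers the `QuantizedCPUTensorId` error in the traceback above:

```python
import torch
import torch.nn as nn
from torch.quantization import (QuantStub, DeQuantStub, fuse_modules,
                                get_default_qconfig, prepare, convert)

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()      # marks where tensors become quantized
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.bn = nn.BatchNorm2d(8)
        self.relu = nn.ReLU()
        self.dequant = DeQuantStub()  # back to float at the output

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.bn(self.conv(x)))
        return self.dequant(x)

m = Net().eval()

# Without this fusion step, convert() leaves nn.BatchNorm2d behind and
# inference fails with the aten::native_batch_norm backend error.
fuse_modules(m, [["conv", "bn", "relu"]], inplace=True)

# Pick whichever quantized engine this build of PyTorch supports.
engine = ("fbgemm" if "fbgemm" in torch.backends.quantized.supported_engines
          else "qnnpack")
torch.backends.quantized.engine = engine
m.qconfig = get_default_qconfig(engine)

prepare(m, inplace=True)            # insert observers
m(torch.randn(4, 3, 16, 16))        # calibrate with representative data
convert(m, inplace=True)            # swap in quantized modules

out = m(torch.randn(1, 3, 16, 16))  # runs on the quantized CPU backend
print(out.shape)
```

The `QuantStub`/`DeQuantStub` pair tells eager-mode quantization where the int8 region of the graph starts and ends; everything in between must have a quantized kernel, which is why the unfused BN fails.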


That was in fact the issue (lack of fusing). Thanks very much @hx89!
