How to avoid the quantization warning "Must run observer before calling calculate_qparams"?

I worked through the PyTorch tutorial for static quantization, and when running the line:

torch.quantization.convert(per_channel_quantized_model, inplace=True)

I receive the following warning:

.../torch/quantization/observer.py:845: 
UserWarning: must run observer before calling calculate_qparams. Returning default scale and zero point 

I call the convert function within the following lines of code:

per_channel_quantized_model = load_model(..)
per_channel_quantized_model.eval()
per_channel_quantized_model.fuse_model()
per_channel_quantized_model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
print(per_channel_quantized_model.qconfig)
torch.quantization.prepare(per_channel_quantized_model, inplace=True)  # inserts observers
evaluate(per_channel_quantized_model, ...)  # calibration: forward passes should update the observers
torch.quantization.convert(per_channel_quantized_model, inplace=True)  # computes scale/zero_point from the observers

Does somebody have an idea what the warning means and how I can avoid it? I appreciate any hints and suggestions!


I'm facing the same issue. torch.quantization.convert is supposed to run the observers, right? This warning does not make sense to me.

The prepare step inserts the observers. After that, every forward pass through the model also runs the observers, which is what collects the calibration statistics. If you call convert without the observers having been run (for example, by skipping prepare or by never running calibration forward passes), it can complain that the observers were not run.
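To illustrate, here is a minimal sketch of the intended flow. The TinyNet module and the random calibration inputs are just placeholders for illustration; the tutorial's model follows the same steps.

import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()    # marks where quantization starts
        self.fc = nn.Linear(8, 4)
        self.dequant = torch.quantization.DeQuantStub()  # marks where it ends

    def forward(self, x):
        x = self.quant(x)
        x = self.fc(x)
        return self.dequant(x)

model = TinyNet()
model.eval()
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')

# 1. prepare() inserts the observers.
torch.quantization.prepare(model, inplace=True)

# 2. Calibration: every forward pass updates the observers' min/max statistics.
with torch.no_grad():
    for _ in range(10):
        model(torch.randn(1, 8))

# 3. convert() reads the observer statistics to compute scale and zero_point.
torch.quantization.convert(model, inplace=True)

If step 2 is skipped, convert has no recorded statistics to work with, falls back to a default scale and zero point, and prints exactly the warning you are seeing.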

Which model are you running this on? We can take a look if there is a repro.

Thanks for your replies, @khizar-anjum and @supriyar!

After @khizar-anjum's comments, I also filed an issue on GitHub. The warning is thrown when running the static quantization tutorial. I also received it in an SSD-type model I wrote. The quantization led to low accuracy, and I began asking myself whether it was caused by the improper quantization that the observer warns about.
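For anyone hitting the same thing, here is a rough sketch of how one could check, after calibration but before convert, whether the observers actually recorded statistics. The helper below is just an illustration, and observer attribute names can differ between PyTorch versions.

import torch
from torch.quantization.observer import ObserverBase

def check_observers(prepared_model):
    # Walk the prepared model and print each observer's recorded range.
    # Observers that were never run typically still hold their initial
    # min_val/max_val values (e.g. inf/-inf or empty tensors).
    for name, module in prepared_model.named_modules():
        if isinstance(module, ObserverBase):
            min_val = getattr(module, 'min_val', None)
            max_val = getattr(module, 'max_val', None)
            print(name, min_val, max_val)

check_observers(per_channel_quantized_model)  # call after evaluate(), before convert()

Observers that still show their initialization values suggest the corresponding module was never exercised during calibration.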


Hello, I have also encountered this problem. Is there any recent solution? Thank you!