How to avoid Quantization warning: "Must run observer before calling calculate_qparams."?

I worked through the PyTorch tutorial for static quantization, and when running the line:

torch.quantization.convert(per_channel_quantized_model, inplace=True)

I receive the following warning:

UserWarning: must run observer before calling calculate_qparams. Returning default scale and zero point 

I call the convert function within the following lines of code:

per_channel_quantized_model = load_model(..)
per_channel_quantized_model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(per_channel_quantized_model, inplace=True)
evaluate(per_channel_quantized_model, ...)
torch.quantization.convert(per_channel_quantized_model, inplace=True)

Does somebody have an idea what the warning means and how I can avoid that? I appreciate any hints and suggestions!


Facing the same issue. torch.quantization.convert is supposed to run the observers, right? This warning does not make sense.

The prepare step inserts the observers. After that, running the model's forward pass also runs the observers.
If you call convert without calling prepare (or without running any forward passes in between), it can complain about the observers never having run.
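To make the expected order concrete, here is a minimal sketch of the eager-mode static quantization flow (the tiny model and tensor shapes are illustrative, not from the original thread). The calibration forward pass between prepare and convert is what runs the observers; skipping it produces the warning above.

```python
import torch
import torch.nn as nn

class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.fc = nn.Linear(4, 4)
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.fc(x)
        return self.dequant(x)

model = TinyModel().eval()
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')

# 1. prepare: inserts observers into the model
torch.quantization.prepare(model, inplace=True)

# 2. calibrate: forward passes run the observers, which record
#    the activation ranges needed to compute scale/zero point
with torch.no_grad():
    model(torch.randn(8, 4))

# 3. convert: swaps modules for quantized versions using the
#    statistics collected in step 2
torch.quantization.convert(model, inplace=True)
```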

Which model are you running this on? We can take a look if there is a repro.

Thanks for your replies, @khizar-anjum and @supriyar!

After @khizar-anjum's comments, I also filed an issue on GitHub. The warning is thrown when running the static quantization tutorial. I also received it in an SSD-type model I wrote. The quantization led to low accuracy, and I began asking myself whether it was caused by the improper quantization parameters the warning mentions.


Hello, I have also encountered this problem. Is there any recent solution? Thank you!

see here for a solution.

I replaced self.skip_add.add with torch.add:

class InvertedResidual(nn.Module):
    def __init__(self, in_channel, out_channel, stride, expand_ratio):
        super(InvertedResidual, self).__init__()
        self.shortcut = stride == 1 and in_channel == out_channel

        if expand_ratio != 1:
            # 1x1 pointwise conv
            # 3x3 depthwise conv
            ...

        # self.skip_add = nn.quantized.FloatFunctional()

    def forward(self, x):
        if self.shortcut:
            # return self.skip_add.add(x, self.conv(x))
            return torch.add(x, self.conv(x))
        return self.conv(x)

RuntimeError: Could not run 'aten::add.Tensor' with arguments from the 'QuantizedCPUTensorId' backend. 'aten::add.Tensor' is only available for these backends: [CPUTensorId, MkldnnCPUTensorId, SparseCPUTensorId, VariableTensorId].

Where did I write it wrong? thanks!
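For context, the error arises because a bare torch.add has no quantized kernel in eager mode, whereas nn.quantized.FloatFunctional (the module commented out above) attaches an observer to the add and gets swapped for a quantized add during convert. A minimal sketch of a quantization-friendly skip connection (the class name and channel count here are illustrative):

```python
import torch
import torch.nn as nn

class QuantFriendlyResidual(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        # FloatFunctional wraps the add so that prepare can attach an
        # observer to it and convert can replace it with a quantized add
        self.skip_add = nn.quantized.FloatFunctional()

    def forward(self, x):
        # behaves like torch.add on float tensors, but stays
        # convertible once the model is quantized
        return self.skip_add.add(x, self.conv(x))
```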


I am still facing the same issue, even after following the instructions in the link you shared. Are there any further updates?