The prepare step inserts the observers. After that, every forward pass of the model also runs the observers.
If you call convert without first calling prepare (and running calibration), it can complain that the observers were never run.
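To make the prepare → calibrate → convert flow concrete, here is a minimal sketch of eager-mode static quantization. The model and shapes are made up for illustration; the API calls (`QuantStub`, `prepare`, `convert`) are the standard eager-mode entry points:

```python
import torch
import torch.nn as nn

class SmallModel(nn.Module):
    """Toy model just to illustrate the quantization workflow."""
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()      # marks float -> quantized boundary
        self.conv = nn.Conv2d(1, 1, 1)
        self.relu = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()  # quantized -> float boundary

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        return self.dequant(x)

model = SmallModel().eval()
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")

prepared = torch.quantization.prepare(model)   # inserts observers
prepared(torch.randn(1, 1, 4, 4))              # calibration: forward pass runs the observers
quantized = torch.quantization.convert(prepared)  # uses the observed stats to quantize
```

Skipping the calibration forward pass is what triggers the warning about observers that never saw data.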
Which model are you running this on? We can take a look if there is a repro.
Following up on @khizar-anjum's comments, I also filed an issue on GitHub. The warning is thrown when running the static quantization tutorial, and I also received it in an SSD-type model I wrote. The quantization led to low accuracy, and I began to wonder whether it was caused by the improper quantization the observer warns against.
import torch.nn as nn

class InvertedResidual(nn.Module):
    def __init__(self, in_channel, out_channel, stride, expand_ratio):
        super(InvertedResidual, self).__init__()
        hidden_channel = int(round(in_channel * expand_ratio))
        # residual shortcut only when spatial size and channels are preserved
        self.shortcut = stride == 1 and in_channel == out_channel
RuntimeError: Could not run 'aten::add.Tensor' with arguments from the 'QuantizedCPUTensorId' backend. 'aten::add.Tensor' is only available for these backends: [CPUTensorId, MkldnnCPUTensorId, SparseCPUTensorId, VariableTensorId].
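This error typically means a plain `+` (or `torch.add`) on the residual path is being run on quantized tensors, which the eager-mode quantized backend does not support. The usual fix is to route the add through `nn.quantized.FloatFunctional`, which carries its own observer and dispatches to a quantized add kernel after convert. A minimal sketch (the block below is a hypothetical simplification, not the exact model from the thread):

```python
import torch
import torch.nn as nn

class QuantFriendlyResidual(nn.Module):
    """Residual block whose skip connection survives quantization."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        # FloatFunctional.add works on both float and quantized tensors
        self.skip_add = nn.quantized.FloatFunctional()

    def forward(self, x):
        # Instead of `self.conv(x) + x`, which fails after convert:
        return self.skip_add.add(self.conv(x), x)
```

After replacing the `+` with `self.skip_add.add(...)`, the prepare/convert workflow attaches an observer to the add and swaps in the quantized kernel, so the `aten::add.Tensor` backend error goes away.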