Quantizing an existing object detector with ResNet backbone

  • Working on Windows with PyTorch 1.9.1+cu102
  • I have an object detector that runs with a Torchvision FP32 ResNet-18 backbone.
  • Added a QuantStub and a DeQuantStub to my model, calling the quant stub on the input and the dequant stub on the output in the model’s forward method (roughly as sketched after this list)
  • Quantized the model using the fbgemm backend

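For reference, here is a minimal sketch of that setup (eager mode static quantization); `my_detector` is a placeholder for my actual FP32 detector:

```python
import torch
import torch.nn as nn
from torch.quantization import QuantStub, DeQuantStub

class QuantizableDetector(nn.Module):
    """Wraps the detector so the input is quantized and the
    output dequantized around the FP32-defined forward pass."""
    def __init__(self, detector):
        super().__init__()
        self.quant = QuantStub()      # fp32 -> quint8 at the input
        self.detector = detector      # detector with ResNet-18 backbone
        self.dequant = DeQuantStub()  # quint8 -> fp32 at the output

    def forward(self, x):
        x = self.quant(x)
        x = self.detector(x)
        return self.dequant(x)

# my_detector is a placeholder for the actual FP32 detector
model = QuantizableDetector(my_detector).eval()
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(model, inplace=True)
# ... run representative calibration batches through `model` here ...
torch.quantization.convert(model, inplace=True)
```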
When the ResNet model performs out += identity (from the Torchvision BasicBlock implementation), I get the following error:

File "d:\anaconda3\lib\site-packages\torchvision\models\resnet.py", line 80, in forward
    out += identity
NotImplementedError: Could not run 'aten::add.out' with arguments from the 'QuantizedCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::add.out' is only available for these backends: [CPU, CUDA, Meta, MkldnnCPU, SparseCPU, SparseCUDA, SparseCsrCPU, BackendSelect, Named, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, UNKNOWN_TENSOR_TYPE_ID, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].

How can I overcome it?

Check out here (Quantization — PyTorch 1.9.1 documentation) for a description of this error: it means you are passing a quantized tensor to an fp32 kernel.

For eager mode quantization, you could edit the model code to replace the addition with FloatFunctional, for example as sketched below.
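A rough sketch of that edit, modeled on Torchvision's own quantizable ResNet (torchvision.models.quantization.resnet): route the residual add through a FloatFunctional module, which has a quantized kernel:

```python
import torch.nn as nn
from torchvision.models.resnet import BasicBlock

class QuantizableBasicBlock(BasicBlock):
    """Same as BasicBlock, but the residual add goes through
    FloatFunctional so it works on quantized tensors."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.skip_add = nn.quantized.FloatFunctional()

    def forward(self, x):
        identity = x
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
        out = self.conv2(out)
        out = self.bn2(out)
        if self.downsample is not None:
            identity = self.downsample(x)
        # replaces `out += identity` from the FP32 implementation
        out = self.skip_add.add(out, identity)
        out = self.relu(out)
        return out
```

Note that Torchvision already ships quantizable ResNet variants under torchvision.models.quantization that make exactly this change, so you may be able to reuse those instead of editing the model yourself.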

For a backbone such as ResNet-18, you could also use FX graph mode quantization, which performs these syntax transforms for you automatically. Check out here (Quantization — PyTorch 1.9.1 documentation) for an API example (scroll down to FX Graph Mode Quantization).
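A rough sketch of the FX workflow on PyTorch 1.9, assuming `float_model` stands in for your detector and `calibration_loader` yields representative input batches:

```python
import torch
from torch.quantization import get_default_qconfig
from torch.quantization.quantize_fx import prepare_fx, convert_fx

float_model.eval()
qconfig_dict = {'': get_default_qconfig('fbgemm')}  # apply to the whole model
prepared = prepare_fx(float_model, qconfig_dict)

# calibrate: observers record activation ranges during these passes
with torch.no_grad():
    for images in calibration_loader:
        prepared(images)

quantized = convert_fx(prepared)
```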

Hi, I'm facing the same issue, did you fix it?

This is a user error that can only be fixed on the user's side; see Quantization — PyTorch 2.0 documentation for instructions on how to fix it.