Could not run 'aten::quantize_per_tensor.tensor_qparams' with arguments from the 'QuantizedCPU' backend

Hi,

I have a Quantization Aware Training (QAT) model that I want to convert to ONNX format, but I ran into an error.

This is my model:

import torch
import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)
        self.pool = nn.MaxPool2d(1)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x1 = self.pool(x)
        x2 = self.pool(x1)
        x3 = self.pool(x2)
        x4 = self.pool(x3)
        # Concatenate along the batch dimension before the second conv.
        x5 = torch.cat((x1, x4), dim=0)
        return F.relu(self.conv2(x5))
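
For reference, I build the model and example inputs roughly like this (the input shape here is just a placeholder, not my real data shape):

# Placeholder input shape; the real data has different dimensions.
model = Model().train()
example_inputs = (torch.randn(1, 3, 32, 32),)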

And this is how I apply QAT:

from torch.ao.quantization import (
    FakeQuantize, MinMaxObserver, MovingAverageMinMaxObserver, QConfig)

def get_qconfig():
    # B is the number of bits.
    B = 7
    # intB activation fake-quant: unsigned affine, range [0, 2**B - 1].
    intB_act_fq = FakeQuantize.with_args(
        observer=MovingAverageMinMaxObserver,
        quant_min=0, quant_max=2**B - 1,
        dtype=torch.quint8, qscheme=torch.per_tensor_affine,
        reduce_range=False)

    # intB weight fake-quant: signed dtype, restricted to [0, 2**B - 1].
    intB_weight_fq = FakeQuantize.with_args(
        observer=MinMaxObserver,
        quant_min=0, quant_max=2**B - 1,
        # quant_min=0, quant_max=int((2**B) / 2 - 1),
        dtype=torch.qint8, qscheme=torch.per_tensor_affine,
        reduce_range=False)

    return QConfig(activation=intB_act_fq, weight=intB_weight_fq)
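
With B = 7, these ranges work out to quant_min = 0 and quant_max = 2**7 - 1 = 127, which fits inside quint8 and the non-negative half of qint8.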

from torch.ao.quantization import quantize_fx

qconfig_dict = {
    "": get_qconfig(),
    "object_type": [
        (torch.nn.modules.pooling.MaxPool2d, None),
    ],
}
model = quantize_fx.prepare_qat_fx(model, qconfig_dict, example_inputs)
model = quantize_fx.convert_fx(model)
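
The QAT fine-tuning that normally sits between prepare_qat_fx and convert_fx looks roughly like this; the optimizer, loss, and iteration count below are stand-ins, not my real training loop:

# Stand-in QAT fine-tuning loop; optimizer, loss, and data are placeholders.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
for _ in range(10):
    out = model(*example_inputs)
    loss = out.abs().mean()  # placeholder loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
model.eval()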

However, when I export the model to ONNX:

torch.onnx.export(model.to("cpu"), example_inputs, filename,
input_names=['input1'], output_names=['result_1'])

onnx_model=onnx.load(filename)
onnx_model,check=onnxsim.simplify(onnx_model)
onnx.save(onnx_model,filename)

I get this error:

NotImplementedError: Could not run 'aten::quantize_per_tensor.tensor_qparams' with arguments from the 'QuantizedCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::quantize_per_tensor.tensor_qparams' is only available for these backends: [CPU, CUDA, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].

Full error message:

Traceback (most recent call last):
  File "/autohome/user/jimchen/anaconda3/envs/QAT3.9test_201_yolov5/lib/python3.9/site-packages/torch/fx/graph_module.py", line 271, in __call__
    return super(self.cls, obj).__call__(*args, **kwargs)  # type: ignore[misc]
  File "/autohome/user/jimchen/anaconda3/envs/QAT3.9test_201_yolov5/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/autohome/user/jimchen/anaconda3/envs/QAT3.9test_201_yolov5/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1488, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "<eval_with_key>.47", line 16, in forward
    quantize_per_tensor_3 = torch.quantize_per_tensor(pool_3, _input_scale_1, _input_zero_point_1, torch.quint8);  pool_3 = _input_scale_1 = _input_zero_point_1 = None
NotImplementedError: Could not run 'aten::quantize_per_tensor.tensor_qparams' with arguments from the 'QuantizedCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::quantize_per_tensor.tensor_qparams' is only available for these backends: [CPU, CUDA, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].

CPU: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/build/aten/src/ATen/RegisterCPU.cpp:31034 [kernel]
CUDA: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/build/aten/src/ATen/RegisterCUDA.cpp:43986 [kernel]
BackendSelect: fallthrough registered at /opt/conda/conda-bld/pytorch_1682343964576/work/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/aten/src/ATen/core/PythonFallbackKernel.cpp:144 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/aten/src/ATen/functorch/DynamicLayer.cpp:491 [backend fallback]
Functionalize: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/aten/src/ATen/FunctionalizeFallbackKernel.cpp:280 [backend fallback]
Named: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/aten/src/ATen/native/NegateFallback.cpp:19 [backend fallback]
ZeroTensor: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at /opt/conda/conda-bld/pytorch_1682343964576/work/aten/src/ATen/core/VariableFallbackKernel.cpp:63 [backend fallback]
AutogradOther: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradCPU: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradCUDA: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradHIP: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradXLA: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradMPS: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradIPU: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradXPU: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradHPU: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradVE: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradLazy: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradMeta: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradMTIA: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradPrivateUse1: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradPrivateUse2: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradPrivateUse3: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
AutogradNestedTensor: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/torch/csrc/autograd/generated/VariableType_2.cpp:17472 [autograd kernel]
Tracer: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/torch/csrc/autograd/generated/TraceType_2.cpp:16726 [kernel]
AutocastCPU: fallthrough registered at /opt/conda/conda-bld/pytorch_1682343964576/work/aten/src/ATen/autocast_mode.cpp:487 [backend fallback]
AutocastCUDA: fallthrough registered at /opt/conda/conda-bld/pytorch_1682343964576/work/aten/src/ATen/autocast_mode.cpp:354 [backend fallback]
FuncTorchBatched: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:815 [backend fallback]
FuncTorchVmapMode: fallthrough registered at /opt/conda/conda-bld/pytorch_1682343964576/work/aten/src/ATen/functorch/VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/aten/src/ATen/LegacyBatchingRegistrations.cpp:1073 [backend fallback]
VmapMode: fallthrough registered at /opt/conda/conda-bld/pytorch_1682343964576/work/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/aten/src/ATen/functorch/TensorWrapper.cpp:210 [backend fallback]
PythonTLSSnapshot: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/aten/src/ATen/core/PythonFallbackKernel.cpp:152 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/aten/src/ATen/functorch/DynamicLayer.cpp:487 [backend fallback]
PythonDispatcher: registered at /opt/conda/conda-bld/pytorch_1682343964576/work/aten/src/ATen/core/PythonFallbackKernel.cpp:148 [backend fallback]


Call using an FX-traced Module, line 16 of the traced Module's generated forward function:
    _input_zero_point_1 = self._input_zero_point_1
    quantize_per_tensor_3 = torch.quantize_per_tensor(pool_3, _input_scale_1, _input_zero_point_1, torch.quint8);  pool_3 = _input_scale_1 = _input_zero_point_1 = None

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
    dequantize_3 = quantize_per_tensor_3.dequantize();  quantize_per_tensor_3 = None

    cat = torch.cat([dequantize_2, dequantize_3], dim = 0);  dequantize_2 = dequantize_3 = None

Traceback (most recent call last):

  File ~/anaconda3/envs/QAT3.9test_201_yolov5/lib/python3.9/site-packages/spyder_kernels/py3compat.py:356 in compat_exec
    exec(code, globals, locals)

  File ~/project/QAT/QAT_FX/QAT_FX_yolov5-coco/untitled2.py:70
    torch.onnx.export(model.to("cpu"), example_inputs, filename,

  File ~/anaconda3/envs/QAT3.9test_201_yolov5/lib/python3.9/site-packages/torch/onnx/utils.py:506 in export
    _export(

  File ~/anaconda3/envs/QAT3.9test_201_yolov5/lib/python3.9/site-packages/torch/onnx/utils.py:1548 in _export
    graph, params_dict, torch_out = _model_to_graph(

  File ~/anaconda3/envs/QAT3.9test_201_yolov5/lib/python3.9/site-packages/torch/onnx/utils.py:1112 in _model_to_graph
    model = _pre_trace_quant_model(model, args)

  File ~/anaconda3/envs/QAT3.9test_201_yolov5/lib/python3.9/site-packages/torch/onnx/utils.py:1067 in _pre_trace_quant_model
    return torch.jit.trace(model, args)

  File ~/anaconda3/envs/QAT3.9test_201_yolov5/lib/python3.9/site-packages/torch/jit/_trace.py:794 in trace
    return trace_module(

  File ~/anaconda3/envs/QAT3.9test_201_yolov5/lib/python3.9/site-packages/torch/jit/_trace.py:1056 in trace_module
    module._c._create_method_from_trace(

  File ~/anaconda3/envs/QAT3.9test_201_yolov5/lib/python3.9/site-packages/torch/fx/graph_module.py:662 in call_wrapped
    return self._wrapped_call(self, *args, **kwargs)

  File ~/anaconda3/envs/QAT3.9test_201_yolov5/lib/python3.9/site-packages/torch/fx/graph_module.py:279 in __call__
    raise e.with_traceback(None)

NotImplementedError: Could not run 'aten::quantize_per_tensor.tensor_qparams' with arguments from the 'QuantizedCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::quantize_per_tensor.tensor_qparams' is only available for these backends: [CPU, CUDA, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].

(backend registration list identical to the one above)

Could you try out our new tool? See Quantization — PyTorch main documentation. The FX graph mode quantization tool is in maintenance mode, so we may not be able to spend time fixing any issues you find in it.
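
For reference, a minimal sketch of the newer PT2E QAT flow those docs describe, assuming a recent PyTorch 2.x build (the module paths have moved between releases, so check the docs for your version):

# Hedged sketch of the PT2E QAT flow; treat these import paths as approximate.
import torch
from torch._export import capture_pre_autograd_graph
from torch.ao.quantization.quantize_pt2e import prepare_qat_pt2e, convert_pt2e
from torch.ao.quantization.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)

model = Model().train()
example_inputs = (torch.randn(1, 3, 32, 32),)  # placeholder shape

# Capture the model as an FX graph, then annotate it with a quantizer.
exported = capture_pre_autograd_graph(model, example_inputs)
quantizer = XNNPACKQuantizer().set_global(
    get_symmetric_quantization_config(is_qat=True))
prepared = prepare_qat_pt2e(exported, quantizer)

# ... QAT fine-tuning on `prepared` goes here ...

quantized = convert_pt2e(prepared)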