Could not run 'aten::_slow_conv2d_forward' with arguments from the 'QuantizedCPU' backend

I was trying to quantize my model. I mainly used the model structure from here: Quantisation example in PyTorch · GitHub
I created a custom MyConv2d layer by copying the original torch.nn.Conv2d source code, so it behaves the same as the stock Conv2d.
I used eager-mode post-training static quantization. After convert(), I got the following error when I tried to run inference.

NotImplementedError: Could not run 'aten::_slow_conv2d_forward' with arguments from the 'QuantizedCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit Internal Login for possible resolutions. 'aten::_slow_conv2d_forward' is only available for these backends: [CPU, CUDA, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradMTIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].

Could someone help me please? Thanks!

Hi Soudabeh, this is a common error that a lot of our users have run into. Please see if this helps: Quantization — PyTorch 2.0 documentation.
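One likely cause, given that you copied Conv2d into a custom class: in eager mode, convert() swaps modules by exact type, so a custom subclass of nn.Conv2d is not in the default mapping and stays a float module, yet still receives quantized tensors from the layers before it. A common workaround is to dequantize before the custom layer so it keeps running in float. A minimal sketch (placeholder layer names and shapes, not your exact model):

```python
import torch
import torch.nn as nn
from torch.ao.quantization import (
    QuantStub, DeQuantStub, get_default_qconfig, prepare, convert,
)

# Stand-in for the copied layer from the question: a plain subclass of
# nn.Conv2d. convert() maps modules by exact type, so this subclass is
# NOT swapped for a quantized implementation.
class MyConv2d(nn.Conv2d):
    pass

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()
        self.conv = nn.Conv2d(3, 8, 3)   # exact nn.Conv2d: swapped by convert()
        self.dequant = DeQuantStub()     # back to float *before* the custom layer
        self.custom = MyConv2d(8, 8, 3)  # stays float, so it must see float inputs

    def forward(self, x):
        x = self.quant(x)
        x = self.conv(x)
        x = self.dequant(x)
        return self.custom(x)

model = Net().eval()
model.qconfig = get_default_qconfig("fbgemm")
prepare(model, inplace=True)
model(torch.randn(1, 3, 32, 32))        # calibration pass
convert(model, inplace=True)
out = model(torch.randn(1, 3, 32, 32))  # no QuantizedCPU error now
```

Alternatively, if the custom layer really should run quantized, you can pass a custom `mapping` to convert(), but wrapping it in DeQuantStub/QuantStub as above is the simpler first check.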