Quantization backend selection failed

Hello,
I think I came across a problem when selecting the quantization backend. From the docs I know PyTorch selects “x86” by default, but I was studying the “fbgemm” code recently, so I changed it to “fbgemm”. However, the code still uses oneDNN as the backend, and I don’t know why.
My env: Mac M1
PyTorch version: 2.0.0

Here is my code:

    import torch

    # select the qconfig for the fbgemm backend
    model_fp32_fused.qconfig = torch.ao.quantization.get_default_qconfig('fbgemm')
    model_fp32_prepared = torch.ao.quantization.prepare(model_fp32_fused, inplace=True)
    # (a calibration pass with representative data would normally run here)
    model_int8 = torch.ao.quantization.convert(model_fp32_prepared)
    model_int8.eval()
    inputs = torch.randn(1, 1, 59, 13)
    output = model_int8(inputs)

(screenshot of the relevant section of setup.py)
From setup.py I saw this. Does it mean that if I want to change the quantization backend, I can't just change the default qconfig, but instead have to recompile the whole PyTorch source? Is that right?

You can do torch.backends.quantized.engine = "fbgemm" to change the backend. Here is the doc: Quantization — PyTorch 2.0 documentation
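
For reference, a minimal sketch of listing the available engines and switching to one of them:

    import torch

    # engines this build of PyTorch was compiled with
    print(torch.backends.quantized.supported_engines)

    # switch the active engine; it must be one of the supported ones,
    # otherwise PyTorch raises a RuntimeError
    torch.backends.quantized.engine = "fbgemm"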

Hello, thanks for replying. After I changed the engine to fbgemm, there is still an error. The error message is:

RuntimeError: quantized engine FBGEMM is not supported

What does this mean? Does my MacBook not support FBGEMM?

Maybe it’s because FBGEMM was not compiled into your build; please make sure USE_FBGEMM is 1 in the generated CMake files.
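
A quicker runtime check than digging through the build files (a sketch; the exact lines in the output depend on how your PyTorch was built):

    import torch

    # the build summary string includes flags such as USE_FBGEMM
    print(torch.__config__.show())

    # an engine only shows up here if its backend was compiled in
    print(torch.backends.quantized.supported_engines)

Note that FBGEMM targets x86 CPUs, so on an Apple Silicon (ARM) Mac it is normally not compiled in at all; qnnpack is the engine that typically works there.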