Accuracy drop after prepare_qat_fx with no quantization

I want to quantize only the backbone of my model (a simple timm feature extractor), but I get poor metrics during training. As a sanity check, I set the qconfig for the backbone (the only quantized part) to None, expecting the model to keep roughly its original accuracy since no quantization should be applied.

    from torch.ao.quantization import get_default_qat_qconfig_mapping
    from torch.ao.quantization.quantize_fx import prepare_qat_fx

    qconfig_mapping = get_default_qat_qconfig_mapping("x86") \
        .set_module_name("backbone", None)
    ...
    model.backbone = prepare_qat_fx(model.backbone, qconfig_mapping, example_inputs)

but it turns out the model still shows bad accuracy (around half of the original), and I'm wondering what's wrong.
I did check that my module_name is correct, and after calling prepare_qat_fx I don't see any quantization parameters (like scale, zero_point, etc.) inside my backbone.
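For reference, one way to check for inserted fake-quantize modules is to scan the prepared module for instances of FakeQuantizeBase, the common base class of the QAT fake-quantize modules (a minimal sketch, assuming model.backbone has already been passed through prepare_qat_fx):

    from torch.ao.quantization.fake_quantize import FakeQuantizeBase

    # list every fake-quantize module that prepare_qat_fx inserted
    fake_quants = [name for name, m in model.backbone.named_modules()
                   if isinstance(m, FakeQuantizeBase)]
    print(len(fake_quants), "fake-quant modules found")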
Any thoughts?

Hi @Andrew_Holmes, can you share a link to a gist that reproduces this error?

you have a global qconfig that is being used, because "backbone" is not the name of any module in the model you passed to prepare_qat_fx

e.g. if your model was

    root module
        backbone
            vertebrae
        frontbone
            sternum

if you did .set_module_name("backbone", None) but left the global qconfig as the default, and then applied quantization to the root module, it would not quantize the backbone module or its child (vertebrae), but it would quantize frontbone and its child sternum.
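Here is a minimal sketch of that first case; the Backbone/Frontbone/Root classes and the Linear layers are just illustrative stand-ins for the tree above:

    import torch
    import torch.nn as nn
    from torch.ao.quantization import get_default_qat_qconfig_mapping
    from torch.ao.quantization.quantize_fx import prepare_qat_fx

    class Backbone(nn.Module):
        def __init__(self):
            super().__init__()
            self.vertebrae = nn.Linear(4, 4)
        def forward(self, x):
            return self.vertebrae(x)

    class Frontbone(nn.Module):
        def __init__(self):
            super().__init__()
            self.sternum = nn.Linear(4, 4)
        def forward(self, x):
            return self.sternum(x)

    class Root(nn.Module):
        def __init__(self):
            super().__init__()
            self.backbone = Backbone()
            self.frontbone = Frontbone()
        def forward(self, x):
            return self.frontbone(self.backbone(x))

    example_inputs = (torch.randn(1, 4),)
    qconfig_mapping = get_default_qat_qconfig_mapping("x86") \
        .set_module_name("backbone", None)
    # prepare the ROOT module: "backbone" matches the mapping entry, so
    # backbone.vertebrae stays a float nn.Linear, while frontbone.sternum
    # gets the global qconfig and is swapped for a QAT Linear
    prepared = prepare_qat_fx(Root().train(), qconfig_mapping, example_inputs)
    print(prepared)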

if you instead pass only model.backbone to the quantizer, your model now looks like this:

    root module
        vertebrae

so since there's no submodule named "backbone" inside model.backbone, the .set_module_name("backbone", None) entry never matches anything, and the global qconfig quantizes vertebrae.
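So for the sanity check where only model.backbone is passed in, one way to disable quantization for the entire sub-model is to clear the global qconfig instead (a sketch, not the only option):

    # disable quantization everywhere in the sub-model passed to prepare_qat_fx
    qconfig_mapping = get_default_qat_qconfig_mapping("x86").set_global(None)
    model.backbone = prepare_qat_fx(model.backbone, qconfig_mapping, example_inputs)

(A fresh QConfigMapping() should behave the same way, since its global qconfig defaults to None.)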