Set qconfig = None but QDQ still appears

Hi, I’ve set qconfig = None for the Detect layer of YOLOv5.

However, when the model is exported to ONNX, the weights still appear to be quantized.

How do I make the Detect layer fully non-quantized?

I am doing this because the post-processed QAT model does not have weights attached to the Conv2Ds of the Detect layer, which I am guessing is causing further export issues. The ‘+’ sign is missing from the weights.
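For reference, here is roughly what I mean by setting qconfig = None, shown on a toy model standing in for YOLOv5 (a minimal sketch; the module names are illustrative, not YOLOv5’s real structure):

```python
import torch
import torch.nn as nn
from torch.ao.quantization import get_default_qat_qconfig, prepare_qat

# Toy stand-in for the real network: a quantizable backbone plus a
# "detect" head that should stay in float.
class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU()
        )
        self.detect = nn.Conv2d(8, 4, 1)

    def forward(self, x):
        return self.detect(self.backbone(x))

model = TinyModel().train()
model.qconfig = get_default_qat_qconfig("fbgemm")
model.detect.qconfig = None  # explicit None on a child overrides the parent qconfig
prepare_qat(model, inplace=True)

print(type(model.detect))  # still a plain nn.Conv2d, no fake-quant attached
```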

Thanks!

Can you provide more context? How exactly are you quantizing the model? It’s hard to say what’s going wrong without understanding what you are specifically doing.

  1. I added QDQs in the Conv class
  2. Fused the model [Conv, BN, ReLU] (rough sketch below)
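For step 2, the fusion looks roughly like this (a sketch assuming eager-mode fusion with a ReLU stand-in; YOLOv5’s Conv block actually uses SiLU, and its attribute names may differ):

```python
import torch.nn as nn
from torch.ao.quantization import fuse_modules

# Minimal stand-in for YOLOv5's Conv block. The real block uses SiLU;
# ReLU is shown because Conv+BN+ReLU is a supported fusion pattern.
class Conv(nn.Module):
    def __init__(self, c1, c2):
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, 3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

model = nn.Sequential(Conv(3, 8), Conv(8, 16)).eval()
conv_blocks = [m for m in model.modules() if isinstance(m, Conv)]
for m in conv_blocks:
    # eval-mode fusion; a train-mode QAT flow would use fuse_modules_qat
    fuse_modules(m, ["conv", "bn", "act"], inplace=True)

print(model)  # bn/act replaced by Identity, folded into the fused conv
```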


You haven’t quantized the model in what you’ve walked through so far. Is there a convert step somewhere that you haven’t listed?

Can you provide a repro that demonstrates the error?

Yup! The actual quantization happens in ONNX. I’m referring to the neuralmagic/sparseml repo (GitHub - neuralmagic/sparseml: Libraries for applying sparsification recipes to neural networks) for the ONNX quantization process. But that isn’t the main issue here.
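In the meantime, this is how I’m checking what ended up in the exported graph (a sketch using the onnx package; the filename is a placeholder):

```python
import onnx

m = onnx.load("model_qat.onnx")  # placeholder path to the exported model

# QDQ pairs show where quantization landed in the graph
for node in m.graph.node:
    if node.op_type in ("QuantizeLinear", "DequantizeLinear"):
        print(node.op_type, node.input[0])

# Weights live in graph.initializer; if the Detect convs have no
# entries here, their weights are not attached to the graph
print(sorted(init.name for init in m.graph.initializer)[:10])
```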

Would you have a small reproducible example that demonstrates the issue? It would be hard to debug or provide a recommendation without more context.