ONNX export after QAT


I’m just wondering if there is a way to export a model trained using quantisation-aware training (QAT) to ONNX. There seem to be conflicting answers in various places, some saying it’s not supported and others saying it now is. Is there a definitive answer on this?

If it is supported, is there an example somewhere? Do we export the “prepared” model, or the “converted” model?
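For context, here is a minimal sketch of the eager-mode QAT workflow the question refers to, showing what the "prepared" and "converted" models are. The `TinyModel` class is a hypothetical toy model for illustration; the question is about which of the two stages, if either, can be passed to `torch.onnx.export`.

```python
import torch
import torch.nn as nn
import torch.quantization as tq

class TinyModel(nn.Module):
    """Hypothetical toy model used only to illustrate the QAT stages."""
    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()      # marks where fp32 -> quantized begins
        self.fc = nn.Linear(4, 2)
        self.dequant = tq.DeQuantStub()  # marks where quantized -> fp32 ends

    def forward(self, x):
        return self.dequant(self.fc(self.quant(x)))

model = TinyModel()
model.qconfig = tq.get_default_qat_qconfig("fbgemm")

# "prepared" model: fake-quant observers inserted, still fp32, used for training
prepared = tq.prepare_qat(model.train())

# ... a normal training loop would run here ...

# "converted" model: observers replaced by real int8 quantized modules
converted = tq.convert(prepared.eval())

out = converted(torch.randn(1, 4))  # int8 inference under the hood
```

Attempting `torch.onnx.export` on either stage is exactly where the reported conflicting results arise.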


Hi @kazimpal87, currently we do not officially support exporting quantized models via ONNX. We would definitely welcome contributions in this area.