I’m just wondering if there is a way to export a model trained with quantisation-aware training (QAT) to ONNX? There seem to be conflicting answers in various places, some saying that it’s not supported and others saying that it now is. Is there a definitive answer on this?
If it is supported, is there an example somewhere? Do we export the “prepared” model, or the “converted” model?
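For context, here is a minimal sketch of the eager-mode QAT flow I have in mind (the toy model, shapes, and file names are just placeholders for my real network), with the two export calls I’m unsure about left commented out:

```python
import torch
import torch.nn as nn
import torch.ao.quantization as tq

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()      # marks where the int8 domain starts
        self.conv = nn.Conv2d(3, 8, 3)
        self.relu = nn.ReLU()
        self.dequant = tq.DeQuantStub()  # back to float at the output

    def forward(self, x):
        return self.dequant(self.relu(self.conv(self.quant(x))))

model = ToyModel().train()
model.qconfig = tq.get_default_qat_qconfig("fbgemm")
prepared = tq.prepare_qat(model)         # inserts FakeQuantize modules
# ... QAT fine-tuning loop would go here ...
prepared.eval()
converted = tq.convert(prepared)         # swaps in real quantized modules

dummy = torch.randn(1, 3, 8, 8)
# Which of these (if either) is the supported path?
# torch.onnx.export(prepared, dummy, "qat_fakequant.onnx", opset_version=13)
# torch.onnx.export(converted, dummy, "qat_int8.onnx", opset_version=13)
```

That is, should `torch.onnx.export` be called on the “prepared” model (which still carries fake-quant observers), or on the “converted” one (which contains actual quantized modules)?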