Export an INT8 quantized model to ONNX/OpenVINO

Can you suggest a way to export an INT8 quantized PyTorch model to ONNX/OpenVINO?
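
For context, here is roughly what I am trying: quantize with PyTorch's built-in post-training dynamic quantization, then call `torch.onnx.export` on the result. The model and shapes below are placeholders, a minimal sketch rather than my real network, and the export step is exactly where I am unsure which ops/opsets are supported:

```python
import torch
import torch.nn as nn

# Placeholder model; my real network is larger, but the question is the same.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 4),
).eval()

# Dynamic post-training quantization: Linear weights become INT8,
# activations are quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# This is the step I am unsure about: do the quantized ops
# map cleanly onto ONNX, and at which opset version?
dummy_input = torch.randn(1, 16)
torch.onnx.export(
    quantized,
    dummy_input,
    "model_int8.onnx",
    opset_version=13,
    input_names=["input"],
    output_names=["output"],
)
```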

So far I have found NNCF (https://github.com/openvinotoolkit/nncf), the Neural Network Compression Framework for enhanced OpenVINO™ inference.
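
If NNCF is indeed the right tool, is the flow below roughly correct? This is just my reading of its post-training quantization examples; the tiny model, loader, and input shape are stand-ins I have not verified:

```python
import nncf
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Tiny stand-ins for my real FP32 model and validation loader.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 222 * 222, 4),
).eval()
val_loader = DataLoader(
    TensorDataset(
        torch.randn(8, 3, 224, 224),
        torch.zeros(8, dtype=torch.long),
    ),
    batch_size=2,
)

def transform_fn(batch):
    # NNCF calls this to extract the model inputs from each
    # batch during calibration.
    images, _labels = batch
    return images

calibration_dataset = nncf.Dataset(val_loader, transform_fn)

# Post-training INT8 quantization over the calibration data.
quantized_model = nncf.quantize(model, calibration_dataset)

# And then export to ONNX as usual, and convert with OpenVINO from there?
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(quantized_model, dummy_input, "model_int8.onnx")
```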

Do you have a better suggestion?