How can I export a model quantized with `PyTorch 2 Export Quantization` to a binary file?

I’m completely new to PyTorch.
I would like to quantize a public model with PyTorch 2 Export Quantization and then export it to a file, so that I can run it on other platforms such as TVM or TensorRT.
Could you kindly share some example code?

Example of pt2 export workflow:
https://pytorch.org/tutorials/prototype/pt2e_quant_ptq_static.html#convert-the-calibrated-model-to-a-quantized-model
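
To give you a concrete starting point, here is a minimal sketch of that tutorial's flow plus serialization, assuming a recent 2.x build where `capture_pre_autograd_graph`, `prepare_pt2e`/`convert_pt2e`, and `torch.export.save` are available (these are prototype APIs, so exact names may differ across releases); the ResNet-18 model and random calibration data are just placeholders for your own model and dataset:

```python
import torch
from torch._export import capture_pre_autograd_graph
from torch.ao.quantization.quantize_pt2e import prepare_pt2e, convert_pt2e
from torch.ao.quantization.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)
import torchvision.models as models

# Placeholder model and inputs; swap in your own model and real data.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
example_inputs = (torch.randn(1, 3, 224, 224),)

# 1. Capture the model into an ATen graph, as in the linked tutorial.
m = capture_pre_autograd_graph(model, example_inputs)

# 2. Insert observers according to the quantizer's configuration.
quantizer = XNNPACKQuantizer().set_global(get_symmetric_quantization_config())
m = prepare_pt2e(m, quantizer)

# 3. Calibrate on representative data (random tensors here, as a stand-in).
for _ in range(16):
    m(*example_inputs)

# 4. Convert observers into actual quantize/dequantize ops.
m = convert_pt2e(m)

# 5. Re-export the quantized graph and serialize it to a binary .pt2 file.
ep = torch.export.export(m, example_inputs)
torch.export.save(ep, "quantized_model.pt2")
```

The resulting `.pt2` file contains the exported program with the quantize/dequantize ops in the graph, which downstream compilers can then consume.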

Here’s an example of how to export to TensorRT, but note that it uses the older FX graph mode workflow:
https://pytorch.org/TensorRT/_notebooks/vgg-qat.html
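
For the PT2 export side, a hedged sketch of loading the saved program back and handing it to Torch-TensorRT's dynamo frontend might look like the following; I haven't verified that every quantize/dequantize op lowers cleanly to TensorRT, so treat this as a starting point rather than a tested recipe (`quantized_model.pt2` is the file saved in the sketch above):

```python
import torch
import torch_tensorrt  # assumption: Torch-TensorRT 2.x is installed

# Load the serialized exported program and rebuild an nn.Module from it.
ep = torch.export.load("quantized_model.pt2")
model = ep.module().cuda()

# Compile via the torch.export/dynamo frontend instead of the old FX path.
inputs = [torch.randn(1, 3, 224, 224).cuda()]
trt_model = torch_tensorrt.compile(model, ir="dynamo", inputs=inputs)
print(trt_model(*inputs).shape)
```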

@jcaip

Thank you for sharing!
Those links look helpful for understanding the workflow.
I understand now that the documentation does not yet have a simple, end-to-end example of running a model quantized with PyTorch 2 Export on another platform.