ONNX export from a PyTorch quantized model


I made my own quantized ResNet50 classification model.
I already verified that I can export the FP32 trained model to ONNX.
Then I tried exporting the INT8 quantized model as well.
It works, but I would like to know whether PyTorch now officially supports exporting quantized models to ONNX.

I am using torch version 1.13.