During QAT, how do I save the float32 model without fused modules?

Hi, I tried QAT with this example: https://github.com/pytorch/vision/blob/master/references/classification/train_quantization.py.
I would like to know whether the FP32 model can be saved during quantization-aware training without the extra parameters learned by QAT, i.e. an FP32 model without fusion, just like a normal pretrained float32 model.

Are you asking if you can save a normal fp32 model? Of course, you can find the docs for serialization here: https://pytorch.org/docs/stable/notes/serialization.html?highlight=save
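For a plain fp32 model, the recommended pattern from the serialization notes is to save the `state_dict` and reload it into a freshly constructed instance of the same architecture. A minimal sketch (the `nn.Linear` model and file name are just placeholders):

```python
import torch
import torch.nn as nn

# Hypothetical fp32 model; substitute your own architecture.
model = nn.Linear(4, 2)

# Save only the parameters/buffers, not the whole pickled object.
torch.save(model.state_dict(), "fp32_model.pth")

# Later: rebuild the same architecture and load the weights back in.
restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load("fp32_model.pth"))
```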

Thanks!
I know how to save a normal fp32 model, but I don't know how to save one during quantization-aware training, because a model saved during QAT contains extra parameters such as scale and zero_point. I want those parameters gone from the saved model, but I don't know how to change the code to achieve that.
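One way to sketch this, assuming the eager-mode API (`torch.quantization`) and a toy `Net` that stands in for the real model: filter the QAT model's `state_dict` down to the keys a fresh float model also has, which drops the observer/fake-quant state (`scale`, `zero_point`, `min_val`, ...). Note this only lines up cleanly when no fusion was applied; if modules were fused, BatchNorm is folded into the conv and the key names change, so a simple filter would not be enough.

```python
import torch
import torch.nn as nn
import torch.quantization as tq

# Toy network (hypothetical; substitute your own model). No fusion is
# applied here, so parameter names match the plain fp32 model's names.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3)
        self.bn = nn.BatchNorm2d(8)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))

# Prepare a copy for QAT: this inserts FakeQuantize modules, whose
# scale/zero_point buffers are the extra entries seen in checkpoints.
qat_model = Net()
qat_model.train()
qat_model.qconfig = tq.get_default_qat_qconfig("fbgemm")
prepared = tq.prepare_qat(qat_model)

# ... the QAT training loop would run here ...

# Keep only the entries that a fresh float model also has; this drops
# the observer/fake-quant state while keeping the learned fp32 weights.
float_model = Net()
float_keys = set(float_model.state_dict().keys())
clean_state = {k: v for k, v in prepared.state_dict().items()
               if k in float_keys}
float_model.load_state_dict(clean_state)
torch.save(float_model.state_dict(), "fp32_checkpoint.pth")
```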

Why do you want these params to disappear? We need to save the scale/zero_point of a QAT model, since they are part of the model's state.
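For context, the scale/zero_point values live as registered buffers on the FakeQuantize modules that `prepare_qat` inserts, which is why they appear in `state_dict()`; they are needed to resume QAT from a checkpoint and later to `convert()` to a true int8 model. A quick way to see this, using a hypothetical toy model and the eager-mode API:

```python
import torch.nn as nn
import torch.quantization as tq

# Toy model to be prepared for QAT (hypothetical).
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())
model.train()
model.qconfig = tq.get_default_qat_qconfig("fbgemm")
prepared = tq.prepare_qat(model)

# scale/zero_point are buffers of the inserted fake-quant modules, so
# they show up in the state_dict alongside the fp32 weights.
qat_only = [k for k in prepared.state_dict()
            if "scale" in k or "zero_point" in k]
```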