If a model is trained with mixed precision and saved via .half(), is there any accuracy loss between that model and its TRT/ONNX conversion?

Hi, we know there is inevitably some accuracy loss between a model trained in FP32 (or TF32) and its FP16 deployment conversions, e.g. to TensorRT or ONNX.

When it comes to mixed precision training, I am wondering whether such a trained model could align perfectly with its .half()-saved checkpoint and, moreover, with its FP16 deployments?
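For context on why I suspect the answer is "not bit-exactly": here is a toy NumPy sketch (my own illustration, not tied to TensorRT or any specific kernel) showing that even with identical FP16 inputs, the result depends on the precision used for accumulation. Mixed precision kernels typically accumulate in FP32, while a pure-FP16 kernel might accumulate in FP16, so identical .half() weights do not guarantee identical outputs:

```python
import numpy as np

# 10k small FP16 terms; summing them is a stand-in for the reductions
# (matmul/conv accumulations) inside a network layer.
vals = np.full(10000, 1e-4, dtype=np.float16)

# FP32 accumulation of FP16 values (what mixed precision kernels usually do).
acc_fp32 = np.float32(0.0)
for v in vals:
    acc_fp32 = np.float32(acc_fp32 + np.float32(v))

# Pure FP16 accumulation (what a naive FP16 kernel might do).
acc_fp16 = np.float16(0.0)
for v in vals:
    acc_fp16 = np.float16(acc_fp16 + v)

# The FP16 running sum stalls once each term drops below half an ULP of the
# total, so the two results diverge noticeably even though the inputs and
# "weights" are bit-identical FP16 values.
print(float(acc_fp32), float(acc_fp16))
```

So even if the .half() checkpoint and the deployed engine hold the same FP16 weights, differences in accumulation precision and reduction order can still produce different outputs.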

Thanks : )