Can AMP mixed-precision training reduce the accuracy drop after converting a model to TensorRT FP16?

Hi

I’m currently observing an accuracy drop when converting a PyTorch model to TensorRT with FP16 precision for inference.

I’m wondering whether training the model with mixed precision (PyTorch AMP) would help it adapt to FP16 numerics, and thus reduce or eliminate this accuracy loss after the TensorRT FP16 conversion.
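For context, by "training with AMP" I mean the standard `torch.autocast` + `GradScaler` loop, roughly like the minimal sketch below (the tiny model and random data are just placeholders, not my actual setup):

```python
import torch
import torch.nn as nn

# Placeholder model/data purely for illustration.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

use_cuda = torch.cuda.is_available()
device = "cuda" if use_cuda else "cpu"
model.to(device)

# GradScaler counteracts FP16 gradient underflow; it is a no-op when disabled.
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)

x = torch.randn(8, 16, device=device)
y = torch.randint(0, 4, (8,), device=device)

for _ in range(3):
    opt.zero_grad()
    # autocast runs eligible ops (e.g. matmuls) in FP16 on GPU;
    # on CPU this falls back to full precision here.
    with torch.autocast(device_type=device,
                        dtype=torch.float16 if use_cuda else torch.bfloat16,
                        enabled=use_cuda):
        loss = nn.functional.cross_entropy(model(x), y)
    scaler.scale(loss).backward()  # scale loss before backward
    scaler.step(opt)               # unscale grads, then optimizer step
    scaler.update()
```

My hope is that exposing the forward pass to FP16 numerics during training like this makes the learned weights/activations less sensitive to the FP16 rounding that TensorRT applies at inference time.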