Mixed Precision or FP16 training in libtorch

Hello all,
Is there a way to train a model in mixed precision / half precision in libtorch using amp?
If there is no amp package in libtorch, can we somehow use the NVIDIA Apex library from C++?

TIA

@a_d
I don’t think we support amp in libtorch at the moment. @mcarilli

You can find a workaround in this post:
Deploy mixed precision model in libtorch
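The workaround in that post amounts to casting the model and its inputs to FP16 manually rather than using automatic mixed precision. Below is a minimal sketch of that idea in libtorch; the model, shapes, and learning rate are made up for illustration. Note this is pure FP16, not true mixed precision, so there is no loss scaling, and training this way can be numerically unstable.

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
    // A small placeholder model; any torch::nn::Module can be cast the same way.
    torch::nn::Sequential model(
        torch::nn::Linear(64, 128),
        torch::nn::ReLU(),
        torch::nn::Linear(128, 10));

    torch::Device device(torch::kCUDA);
    // Move parameters to the GPU and cast them to FP16.
    model->to(device, torch::kHalf);

    torch::optim::SGD optimizer(model->parameters(), /*lr=*/1e-3);

    // Inputs and targets must match the model's dtype and device.
    auto input  = torch::randn({8, 64},
                               torch::TensorOptions().device(device).dtype(torch::kHalf));
    auto target = torch::randn({8, 10},
                               torch::TensorOptions().device(device).dtype(torch::kHalf));

    optimizer.zero_grad();
    auto output = model->forward(input);
    auto loss = torch::mse_loss(output, target);
    loss.backward();
    optimizer.step();

    std::cout << "loss: " << loss.item<float>() << std::endl;
}
```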

I think Apex is a mix of its own libraries and PyTorch libraries, so I suspect it would not be easy to use purely from C++.

Thanks a lot for your reply @glaringlee, I will have a look at it.