Export fp16 model to ONNX

Exporting an fp16 PyTorch model to ONNX via the exporter fails. How can I solve this?
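
To make the setup concrete, here is a minimal sketch of the kind of export I'm attempting (the toy model and input shape are made up stand-ins for my actual network, and I'm assuming a CUDA device since that's where I run the model in half precision):

```python
import torch
import torch.nn as nn

# Hypothetical toy model standing in for my real (larger) network
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4)).eval()

# Cast to fp16 and move to GPU (assuming CUDA is available)
model = model.half().cuda()

# Dummy input matching the model's fp16 precision and device
dummy_input = torch.randn(1, 16, dtype=torch.float16, device="cuda")

# Standard exporter call — roughly the point where the export fails for me
torch.onnx.export(
    model,
    dummy_input,
    "model_fp16.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
)
```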

Most of the discussion around quantized exports that I've found is on this thread. However, most users there are talking about int8, not fp16, and I'm not sure how similar the approaches and issues are between the two precisions.