I'm trying to use native AMP support and noticed that, in my model, sum() produces a float32 output from float16 inputs, whereas all the other operations run in float16. I'd like to open an issue, but I'm not sure whether this is expected behavior.
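A minimal sketch of what I'm seeing (shapes and names are arbitrary, just for illustration; on CPU I fall back to bfloat16, since that is the low-precision dtype CPU autocast uses, so the sum() dtype there may differ from what I observe on GPU):

```python
import torch

# Pick a device and the matching autocast low-precision dtype.
device = "cuda" if torch.cuda.is_available() else "cpu"
low = torch.float16 if device == "cuda" else torch.bfloat16

x = torch.randn(8, 8, device=device)  # float32 input
with torch.autocast(device_type=device, dtype=low):
    y = x @ x     # matmul runs in the low-precision dtype under autocast
    s = y.sum()   # on my setup this comes out as float32, not float16

print(y.dtype, s.dtype)
```

On my machine `y.dtype` is float16 but `s.dtype` is float32, which is the mismatch I'm asking about.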