Alternative to torch.inverse for 16 bit


torch.inverse() doesn’t work with half precision.

Is there an alternative way we could compute the inverse that would work? I know it may be impossible because of stability issues?

I’m not aware of any backends implementing a half-precision inverse off the shelf. Part of that might be stability.
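A common workaround (a sketch, not an official recipe) is to upcast to float32, invert there, and cast the result back down. This gives you a half-precision result without needing a half-precision kernel:

```python
import torch

torch.manual_seed(0)

# A reasonably well-conditioned fp16 matrix (diagonal shift for stability)
A = torch.randn(4, 4, dtype=torch.float16) + 4 * torch.eye(4, dtype=torch.float16)

# Upcast, invert in float32 (where a kernel exists), downcast the result
A_inv = torch.inverse(A.float()).half()

# Residual of A @ A_inv against the identity, measured in float32
err = (A.float() @ A_inv.float() - torch.eye(4)).abs().max()
```

The residual `err` stays small here only because the matrix is well conditioned; for ill-conditioned inputs the fp16 storage of the result will lose accuracy regardless of how the inverse is computed.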

Then a practical question would be what your use case is and what you hope to get from it. Is it for a certain problem shape? What is it that makes you prefer half over single precision? What do you need the inverse for?

Anecdotally, way back when I studied numerical linear algebra and analysis at university, they used to say that if you explicitly compute the inverse, you’re doing it wrong. That may not apply in your case, but I must admit it seems very special-purpose to need the explicit inverse while also needing low precision.
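To illustrate that advice: if what you ultimately need is to apply the inverse to some vectors, solving the linear system directly is faster and more accurate than materializing `inverse(A)` first. A minimal sketch:

```python
import torch

torch.manual_seed(0)

# Well-conditioned SPD system in float64 for a clean comparison
A = torch.randn(8, 8, dtype=torch.float64)
A = A @ A.T + 8 * torch.eye(8, dtype=torch.float64)
b = torch.randn(8, 3, dtype=torch.float64)

x_solve = torch.linalg.solve(A, b)  # preferred: one factorization, no explicit inverse
x_inv = torch.inverse(A) @ b        # explicit inverse: extra work, extra rounding
```

Both give the same answer here, but `torch.linalg.solve` avoids forming the n×n inverse and is the numerically recommended path.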

Best regards


I need the inverse of a covariance matrix in my optimization. Is there a way I could implement the inverse in bfloat16?
And I need low precision for large-model training.
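For a covariance matrix specifically, one hedged sketch is to keep the data in bfloat16 but do the factorization in float32: bfloat16 has only about 8 mantissa bits, which is too coarse for a direct factorization, but since a covariance matrix is symmetric positive (semi-)definite you can use a Cholesky-based inverse in float32 and cast only the stored result back to bfloat16. The sizes and jitter value below are illustrative assumptions:

```python
import torch

torch.manual_seed(0)

# bfloat16 data, as in mixed-precision training
X = torch.randn(256, 16, dtype=torch.bfloat16)

# Accumulate the covariance in float32, with a small jitter for positive definiteness
Xf = X.float()
cov = (Xf.T @ Xf) / Xf.shape[0] + 1e-3 * torch.eye(16)

# SPD-aware inverse: Cholesky factorization, then inverse from the factor
L = torch.linalg.cholesky(cov)
cov_inv = torch.cholesky_inverse(L).to(torch.bfloat16)  # store result in bf16
```

If the inverse is only ever applied to vectors (e.g. whitening or a natural-gradient step), replacing `cholesky_inverse` with `torch.cholesky_solve` against those vectors avoids forming the inverse at all, per the advice above.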