Thanks for your answer, @ptrblck.
I am chaining matrix operations to understand how neural networks work and how to make them more efficient (XAI, low-rank pruning, channel pruning…). For that I need to invert very large matrices whose eigenvalues span a very wide range, so they are badly conditioned. Since the inversions are performed iteratively, the error propagates from one step to the next.
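To give an idea of the issue, here is a small toy example (the Hilbert matrix is only a stand-in for my real matrices, which are much larger) showing how the float64 inversion residual degrades when the eigenvalue range is wide:

```python
import torch

# Toy ill-conditioned matrix (Hilbert matrix) -- only a stand-in for the
# much larger matrices I actually get from the network analysis.
n = 12
A = torch.tensor([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)],
                 dtype=torch.float64)

inv_A = torch.linalg.inv(A)
residual = (A @ inv_A - torch.eye(n, dtype=torch.float64)).abs().max()

print(f"condition number ~ {torch.linalg.cond(A).item():.2e}")
print(f"max |A @ A^-1 - I| = {residual.item():.2e}")
```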
I use torch because the torch.fx module makes it easy to analyse neural networks automatically, and because I find it very efficient in terms of computing resources: the calculations are fast and it is easy to mix AI and explicit computation in the same pipeline. I am simply looking to reduce my numerical error, and for that I would like to use float128 while continuing to use torch. I saw that this type exists in numpy, so I was wondering whether it exists in torch, or whether it is possible to bring a numpy type into torch, even if that of course means losing the GPU capabilities.
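To make the conversion part of the question concrete, this is roughly what I have in mind (simplified); as far as I can tell torch.from_numpy has no 128-bit float dtype to map to, so I expect it to fail:

```python
import numpy as np
import torch

x = np.eye(4, dtype=np.longdouble)  # float128 on my platform, works fine in numpy

try:
    t = torch.from_numpy(x)
except TypeError as e:
    # on my setup this raises because torch does not expose a float128 dtype
    print("conversion failed:", e)
```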
As you mention in the request, it is true that not all compilers handle numpy's float128 correctly, but by choosing a suitable docker image for my system it does work, and it is in this context that I am trying to get more numerical precision.
To summarize:
Do you think there is a way to import a type that torch does not know about but that is described in the numpy library, given the interoperability between the two libraries?
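If no such type exists, the fallback I imagine would be to do only the precision-sensitive step on the CPU in numpy's longdouble and hand a float64 result back to torch. Here is a rough sketch of that idea (the helper name and the Newton-Schulz refinement step are just one possibility, not something I have validated; np.linalg.inv itself only supports float32/float64, which is why only matrix products are done in longdouble):

```python
import numpy as np
import torch

def refine_inverse_longdouble(A_t: torch.Tensor, num_iters: int = 3) -> torch.Tensor:
    """Hypothetical helper: refine a float64 inverse in numpy longdouble.

    The initial inverse is computed by torch in float64; a few Newton-Schulz
    steps X <- X @ (2I - A @ X) are then run in longdouble, since plain
    matrix products (unlike np.linalg.inv) do work in that dtype.
    """
    X = torch.linalg.inv(A_t).detach().cpu().numpy().astype(np.longdouble)
    A = A_t.detach().cpu().numpy().astype(np.longdouble)
    I = np.eye(A.shape[0], dtype=np.longdouble)
    for _ in range(num_iters):
        X = X @ (2.0 * I - A @ X)
    # torch has no float128, so the result has to be cast back to float64
    return torch.from_numpy(X.astype(np.float64))
```

But I would rather keep everything inside torch if there is any way to do so.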
Thank you in advance for any answers you can provide on the subject. I am sorry if I am missing some important details; I am not very familiar with how dtypes are implemented and used by the torch library. I am more of a simple user who has run into a numerical accuracy problem.