Float128 from numpy

Hi,

I need float128 precision (which does not need CUDA or any GPU development).
I tried this code:

import numpy as np
import torch

a = np.zeros(10, dtype=np.float128)
b = torch.tensor(a)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can't convert np.ndarray of type numpy.float128. The only supported types are: float64, float32, float16, complex64, complex128, int64, int32, int16, int8, uint8, and bool.

In the near future, do you think it will be possible to use numpy's float128 inside torch?

Thanks for any answer,

Could you describe your use case for np.float128 a bit more, please?
Often math libraries do not support the float128 type, so a lot of operations might be failing, as mentioned in this request.
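
For example (assuming a typical x86-64 Linux build of numpy, where float128 is really 80-bit extended precision), elementwise ops and matmul fall back to a slow non-BLAS path and still work, while the LAPACK-backed routines in numpy.linalg reject the dtype outright:

import numpy as np

a = np.eye(3, dtype=np.float128)
np.matmul(a, a)    # works, but uses a slow fallback loop instead of BLAS
np.linalg.inv(a)   # raises TypeError: array type float128 is unsupported in linalg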

Thanks for your answer @ptrblck,

I am doing chained matrix operations to try to understand how neural networks work and how to make them more efficient (XAI, pruning by low-rank methods, channel pruning…). For that I need to invert very large matrices whose eigenvalues span a very large range. As the matrix inversions are performed iteratively, the error is propagated.
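
As a toy illustration (not my actual computation, just the kind of effect I am fighting): per-step corrections that are below float64 resolution simply disappear, while float128 keeps them:

import numpy as np

x64 = np.float64(1.0)
x128 = np.float128(1.0)
for _ in range(100000):
    x64 += 1e-18               # below float64 resolution near 1.0, silently rounded away
    x128 += np.float128(1e-18)

print(x64)   # 1.0 exactly: every update was lost
print(x128)  # ~1.0000000000001: float128 keeps roughly 18-19 significant digits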

I use torch because the torch.fx module makes it easy to analyse neural networks automatically, and because I find it very efficient in its use of computing resources. The calculations are very fast and easy to combine with the neural network, so you can mix AI and explicit computation easily. I'm just looking to reduce my numerical error, and for that I'd like to use float128 while continuing to use torch. I saw that this type exists in numpy, and I was wondering if it exists in torch, or if it is possible to bring a numpy type into torch, even if you of course lose the GPU capabilities.

As you mention in the request, it is true that not all compilers handle float128 correctly under numpy, but by choosing a suitable docker image on my system it still works, and it is in this context that I am trying to get more numerical precision.

To summarize:
Do you think there is a way to import a type that is unknown to torch but described in the numpy library, given the interoperability between the two libraries?

Thank you in advance for any answers you can provide on the subject. I'm sorry if I'm missing some important details; I'm not very familiar with how types are implemented and used by the torch library. I am more of a simple user who has a problem with numerical accuracy.

Thanks for describing your use case in detail.
I think one possible approach would be to stick to numpy and write custom autograd.Functions wrapping these operations, as described e.g. here. The disadvantage would be the need to implement the backward pass for each operation manually, as autograd does not work directly with numpy arrays, but it might allow you to use the wide dtype from numpy together with the utilities from PyTorch.
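As a rough sketch (the class name is made up, and I'm assuming float64 CPU tensors at the boundary), wrapping a single matmul so that its accumulation runs in numpy float128 could look like this:

import numpy as np
import torch

class Float128MatMul(torch.autograd.Function):
    # Matrix product whose accumulation runs in numpy float128.
    # Inputs and outputs stay float64 torch tensors so the rest of the graph keeps working.

    @staticmethod
    def forward(ctx, a, b):
        ctx.save_for_backward(a, b)
        a128 = a.detach().cpu().numpy().astype(np.float128)
        b128 = b.detach().cpu().numpy().astype(np.float128)
        out = np.matmul(a128, b128).astype(np.float64)
        return torch.from_numpy(out)

    @staticmethod
    def backward(ctx, grad_out):
        a, b = ctx.saved_tensors
        # Manual backward pass: standard matmul gradients, computed in float64.
        return grad_out @ b.t(), a.t() @ grad_out

# Usage: behaves like torch.matmul, but the forward accumulation is done in float128.
a = torch.randn(4, 4, dtype=torch.float64, requires_grad=True)
b = torch.randn(4, 4, dtype=torch.float64, requires_grad=True)
c = Float128MatMul.apply(a, b)
c.sum().backward()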
Would this approach work for you?

This approach seems right to me. I will try it and I think it will work, even if it will require, as you said, some recoding work to compensate for autograd. Thank you very much @ptrblck

I just want to add, for any torch developers who might read these lines, that if float128 one day becomes natively supported by torch like float64, I know several users who would be delighted, even if it of course excludes the use of GPUs.