Using a 128-bit floating point datatype with PyTorch (not a complex number)

I am performing some operations in PyTorch that require a very high degree of precision. I am currently using torch.float64 as my default datatype. All my numbers are real. I would like to use a float128 datatype, since memory is not an issue for this simulation. However, the only 128-bit datatype I can find in the documentation is torch.complex128, where the real and imaginary parts are each 64 bits. Is there a datatype, or some other way, to use all 128 bits for my real numbers?
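Roughly what my dtype handling looks like at the moment (just a minimal sketch, not the full simulation):

```python
import torch

# Current setup: make float64 the default dtype for every new real tensor.
torch.set_default_dtype(torch.float64)

x = torch.randn(1000)
print(x.dtype)       # torch.float64

# torch.complex128 is not a substitute for a real float128: it stores two
# 64-bit floats (real + imaginary), so each component still has only
# float64 precision.
z = torch.zeros(3, dtype=torch.complex128)
print(z.real.dtype)  # torch.float64
```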

Thank you

Hi,

We don’t have any support for float128, I’m afraid. I don’t think this type is supported by CUDA either.

If you don’t care about CUDA, I guess we could accept a PR adding this new datatype and implementations for it, but no core contributor is working on that at the moment.
But you can open an issue with a feature request if you want to discuss this further.
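As a quick illustration (exact dtype list and values may vary a little across builds), float64 is the widest real floating-point dtype that is exposed:

```python
import torch

# There is no torch.float128; float64 is the widest real floating-point
# dtype, so machine epsilon bottoms out around 2.2e-16.
for dt in (torch.float16, torch.bfloat16, torch.float32, torch.float64):
    print(dt, torch.finfo(dt).eps)

# Asking for a wider dtype fails, since the attribute does not exist:
# torch.zeros(3, dtype=torch.float128)  # AttributeError
```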

Thanks a lot for your answer. For this problem, which is an optics simulation, I do not care about CUDA. I will open an issue with a feature request to discuss this further.

I am also working on an optimization problem for molecular dynamics simulations where I need high precision. After months of struggling to find the bug in seemingly correct code, I finally reached the conclusion that the issue is the precision limitation of float64. It would be very helpful if support for float128 (equivalent to real128 in Fortran) could be added in the near future.
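As a toy illustration of the kind of precision loss I mean (not my actual MD code, just a sketch of the accumulation problem):

```python
import torch

# Toy example of the float64 limitation: adding a small increment to a large
# accumulator (e.g. summing per-particle energies) is lost completely once
# the increment drops below the accumulator's spacing (ULP).
total = torch.tensor(1.0e16, dtype=torch.float64)
increment = torch.tensor(1.0, dtype=torch.float64)

print(total + increment - total)       # tensor(0.) -- the 1.0 is absorbed

# float64 eps ~ 2.22e-16; IEEE quad precision (real128) would be ~1.9e-34.
print(torch.finfo(torch.float64).eps)  # 2.220446049250313e-16
```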
