I am performing some operations in PyTorch that require a very high degree of precision, and I am currently using torch.float64 as my default dtype. All of my numbers are real. I would like to use a float128 datatype, since memory is not an issue for this simulation. However, the only 128-bit datatype I can find in the documentation is torch.complex128, where the real and imaginary parts are each 64 bits. Is there a datatype, or some other way, to use all 128 bits for my real numbers?
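
For reference, here is a minimal sketch of my current setup. The last (commented-out) line is hypothetical: it is what I would like to be able to write, not something I have found in the PyTorch API.

```python
import torch

# Current setup: make float64 the default for all new floating-point tensors
torch.set_default_dtype(torch.float64)

x = torch.randn(1000, 1000)   # created as float64 by default
print(x.dtype)                # torch.float64

# What I'm hoping exists (hypothetical -- I can't find this in the docs):
# torch.set_default_dtype(torch.float128)
```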