I implemented a torch version by translating the scipy source code to Python, and it produces the same results as the scipy version. I wonder whether we could integrate the new code into the current torch.distributions.Normal. Thank you!
Could you explain the numerical instability a bit? I.e., where is it observed, and how does scipy deal with it?
I’m sure a PR would be welcome, if the instability is reproducible or breaks certain use cases.
Numerical instability occurs when the input to the inverse CDF is very close to 0 or 1. scipy handles these cases with dedicated tail approximations; here is a screenshot of the comments in scipy's source code:
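Besides the comments, a quick way to see scipy's handling in action is to feed `scipy.special.ndtri` (the inverse of the standard normal CDF) probabilities extremely close to 0 and check that it stays finite and accurate rather than underflowing (this sketch assumes SciPy is installed):

```python
# scipy.special.ndtri is the inverse standard normal CDF.  Thanks to
# its tail approximations it remains finite and accurate even for
# probabilities far below double-precision epsilon away from 0.
from scipy.special import ndtri

for p in (1e-10, 1e-100, 1e-300):
    # Each quantile is a finite negative number deep in the left tail.
    print(p, ndtri(p))
```

The same inputs break a naive formulation based on `erfinv(2p - 1)`, because `2p - 1` rounds to exactly `-1.0` in floating point once `p` drops below machine epsilon.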
Thanks for the update!
Would you like to create a feature request on GitHub and post the explanation there, so that we could discuss it there with the module owners and track it?
Btw. thanks for digging into this.
DoubleTensor does not solve the issue with erfinv; it still fails for inputs around 1e-18. But torch.special.ndtri works down to 1e-40 in float32, and below 1e-100 in float64.
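To make the comparison concrete, here is a minimal repro sketch (assuming torch >= 1.10, which provides `torch.special.ndtri`). It evaluates the standard normal inverse CDF at p = 1e-18 both via the erfinv-based formula that `Normal.icdf` uses and via `ndtri`:

```python
# Compare the erfinv-based inverse CDF against torch.special.ndtri
# for a tail probability below double-precision epsilon.
import math
import torch

p = torch.tensor(1e-18, dtype=torch.float64)

# erfinv path: sqrt(2) * erfinv(2p - 1).  For p = 1e-18 the quantity
# 2p - 1 rounds to exactly -1.0 in float64, so erfinv returns -inf.
x_erfinv = math.sqrt(2.0) * torch.erfinv(2 * p - 1)

# ndtri path: uses rational tail approximations on p directly, so it
# never forms the catastrophically rounded 2p - 1 and stays finite.
x_ndtri = torch.special.ndtri(p)

print(x_erfinv.item())  # -inf
print(x_ndtri.item())   # finite, roughly -8.76
```

The key design point is that ndtri works on p itself (switching to a tail expansion for small p), whereas erfinv is handed `2p - 1`, which has already lost all the information about p by the time it is evaluated.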