Numerically stable log CDF of normal distribution

I don’t exactly know how this works, but if I recall correctly, one ought to use Normal.log_prob(value) directly instead of manually taking the log of the probability, since log_prob is computed in a numerically more stable way.
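For example, with a standard normal evaluated far out in the tail:

```python
import torch
from torch.distributions import Normal

dist = Normal(0.0, 1.0)
x = torch.tensor(40.0)

dist.log_prob(x)              # tensor(-800.9189): evaluated analytically, finite
dist.log_prob(x).exp().log()  # tensor(-inf): the probability underflows to 0 first
```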

Now I need to take the log of the CDF of a normal distribution, and NaNs are appearing somewhere in my training pipeline. I’m guessing it’s a similar numerical problem (but let me know if that doesn’t make sense).

What would be the best way to compute the log CDF in a numerically stable way? Similarly, I also need log(1 - Normal.cdf(value)).

I know SciPy’s distribution classes have logcdf() and logsf(), where ‘sf’ is the ‘survival function’, i.e. 1 - cdf. In PyTorch, would I do best to just implement those myself?
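For reference, this is what I mean in SciPy:

```python
from scipy.stats import norm

# Both are computed in a numerically stable way, even deep in the tails
# where cdf()/sf() themselves would underflow to 0.
norm.logcdf(-40.0)  # log Phi(-40), roughly -804.6
norm.logsf(40.0)    # log(1 - Phi(40)), same value by symmetry
```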


Does anyone familiar with the distribution classes know what the best approach would be?

I would just use one of the known approximations for the normal CDF. In a project of mine, I needed a better log erfc, which is essentially the same problem (see here). Here’s a separate list of erf/erfc approximations. Pick one whose error characteristics suit you and which is amenable to a stable log transformation. I used the approximation from Karagiannidis & Lioumpas (2007) (from the erfc approximation link), splitting and cancelling with the log as much as possible. It has small relative error once x is a bit larger than 0; the larger relative error near 0 didn’t matter for my use case. This is another good thread to take a look at.
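To make that concrete, here’s a minimal sketch of what I mean by splitting with the log (the constants A = 1.98, B = 1.135 are the ones from the Karagiannidis & Lioumpas paper; log_erfc_kl and log_normal_cdf are just names I made up for this sketch):

```python
import math
import torch

def log_erfc_kl(x):
    # Karagiannidis & Lioumpas (2007):
    #   erfc(x) ~= (1 - exp(-A*x)) * exp(-x**2) / (B * sqrt(pi) * x),  x > 0
    # Taking the log term by term keeps the exp(-x**2) factor from
    # underflowing; the relative error grows as x -> 0+.
    A, B = 1.98, 1.135
    return (
        torch.log1p(-torch.exp(-A * x))          # log(1 - exp(-A*x)) via log1p
        - x * x                                  # log of the exp(-x**2) factor
        - torch.log(B * math.sqrt(math.pi) * x)  # log(B * sqrt(pi) * x)
    )

def log_normal_cdf(z):
    # log Phi(z) via the identity Phi(z) = 0.5 * erfc(-z / sqrt(2)).
    # Use the tail approximation only where the direct computation would
    # underflow (deep left tail); elsewhere log(cdf) is already fine.
    x = -z / math.sqrt(2.0)
    direct = torch.log(0.5 * torch.erfc(x))
    # The clamp keeps the unused branch of torch.where NaN-free; note that
    # where() still evaluates both branches, which matters for gradients.
    tail = log_erfc_kl(torch.clamp(x, min=1e-6)) - math.log(2.0)
    return torch.where(z < -5.0, tail, direct)
```

log(1 - cdf) is then just the reflection, log_normal_cdf(-z), and for a general Normal(loc, scale) you’d standardize first with z = (value - loc) / scale.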