If I have tensors `x` and `y`, and I want to compute `z = log(exp(x) + exp(y))`, I can use `z = torch.logaddexp(x, y)`.
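For example, the naive formula blows up for large-magnitude inputs where `torch.logaddexp` stays finite:

```python
import torch

x = torch.tensor([1000.0, -1000.0])
y = torch.tensor([1000.0, -1000.0])

# Naive: exp() overflows to inf (or underflows to 0), so log() gives inf/-inf.
print(torch.log(torch.exp(x) + torch.exp(y)))  # tensor([inf, -inf])

# Stable: the addition happens in log space, never materializing exp(x).
print(torch.logaddexp(x, y))                   # tensor([1000.6931, -999.3069])
```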
Suppose now I have the tensors `z` and `y`, and I want to compute `x = log(exp(z) - exp(y))`. In other words, I want to compute the inverse function of `z = logaddexp(x, y)` given `y`.
Is there a `torch` function/method that can do this with good numerical stability for large/small numbers? I tried searching for `logsubexp` and `logdiffexp` but nothing came up. One solution would be to do `x = z + torch.log1p(-torch.exp(y - z))`, but this is not ideal because computing `exp(y - z)` could overflow/underflow in some cases.
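In code, what I have in mind is something like this (`logdiffexp` is just a name I made up for illustration, not an existing torch function):

```python
import torch

def logdiffexp(z: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Compute log(exp(z) - exp(y)) elementwise, assuming z >= y."""
    return z + torch.log1p(-torch.exp(y - z))

# Round trip: recover x from z = logaddexp(x, y).
x = torch.tensor([3.0, 100.0])
y = torch.tensor([2.0, 99.0])
z = torch.logaddexp(x, y)
print(logdiffexp(z, y))  # recovers x: tensor([3., 100.]), up to float rounding
```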
After a few experiments with different inputs/dtypes, `z + torch.log1p(-torch.exp(y - z))` seems to be a reasonably good solution. If `z = logaddexp(x, y)`, it's safe to assume `z > y`, so (contrary to what I said above) `exp(y - z)` cannot overflow. It can underflow if `z >> y`, but in that case `z ~= x`, and also `z + log1p(-exp(y - z)) ~= z + log1p(-0) == z ~= x`, so the answer is still accurate.
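A quick check of the underflow case:

```python
import torch

x = torch.tensor([50.0])
y = torch.tensor([-2000.0])
z = torch.logaddexp(x, y)  # tensor([50.]), since exp(-2000) is negligible

# exp(y - z) underflows to 0, but log1p(-0) == 0, so we just get z back,
# which equals x to full precision here.
print(z + torch.log1p(-torch.exp(y - z)))  # tensor([50.])
```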
For any given precision, as `x` gets smaller, eventually `z == y` for all `x` in floating point representation. So for very small `x` and `z ~= y`, the inverse cannot be computed exactly anyway. In this case `z + log1p(-exp(y - z)) ~= z + log1p(-exp(0)) == z + log1p(-1) == -inf`. So I think this solution is as good as can be hoped for.
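For completeness, the degenerate case where `z` collapses onto `y`:

```python
import torch

y = torch.tensor([0.0])
x = torch.tensor([-1000.0])
z = torch.logaddexp(x, y)  # tensor([0.]), equal to y in float32

# exp(y - z) == exp(0) == 1, so log1p(-1) == -inf: the true x is not
# representable at this precision, and -inf is the honest answer.
print(z + torch.log1p(-torch.exp(y - z)))  # tensor([-inf])
```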