Is torch.logsumexp accurate for very negative inputs?

I have some numerical issues and I am trying to find out whether there is a bug in my calculation. I am using a model that samples weights, so I take logsumexp over the sample dimension. Assuming `log_lik` is a tensor of shape `(sample_n, instances)`:

```python
ll = torch.logsumexp(log_lik.sum(dim=1), dim=0)
ll = ll - np.log(sample_n)
ll = ll / instances
```

My predictions have extremely low probability because they are out-of-distribution samples, so the log-probability can be something like -100,000. When I try to do the logsumexp operation by hand instead of using the torch function, `exp(-100000)` just underflows to 0.

In this case I would expect the output of logsumexp to be `-inf`, since it is taking the log of 0, but it always comes up with a finite answer. How can this be?
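The behavior described above can be reproduced with a small sketch (the values here are made up to mimic the -100,000 scale in the question):

```python
import torch

# Hypothetical per-sample log-likelihoods, far below float32's exp range.
x = torch.tensor([-100000.0, -100001.0, -100002.0])

# Naive log(sum(exp(x))): exp underflows to 0, so the log is -inf.
naive = torch.log(torch.exp(x).sum())
print(naive)   # tensor(-inf)

# torch.logsumexp stays finite because it factors out the maximum first.
stable = torch.logsumexp(x, dim=0)
print(stable)  # ≈ -99999.59
```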


That is actually the whole point of having a single function that does log + sum + exp: it is much more precise than applying them one after the other.
The trick is to subtract the maximum value from the arguments of the exp. You can pull it out of the sum, since log(exp(max_val)) = max_val, and the exp/sum/log is only computed on the differences relative to that maximum. Those differences are much smaller and lead to much less numerical error.
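The trick above can be written out directly, using the identity log Σ exp(xᵢ) = m + log Σ exp(xᵢ − m) with m = max(x). A minimal sketch (the function name is made up for illustration):

```python
import torch

def logsumexp_manual(x: torch.Tensor, dim: int) -> torch.Tensor:
    # Factor out the maximum: log(sum(exp(x))) = m + log(sum(exp(x - m))).
    # The arguments of exp are now <= 0, so nothing overflows, and the
    # largest term exp(0) = 1 keeps the sum away from total underflow.
    m = x.max(dim=dim, keepdim=True).values
    return (m + torch.log(torch.exp(x - m).sum(dim=dim, keepdim=True))).squeeze(dim)

x = torch.tensor([-100000.0, -100001.0, -100002.0])
print(logsumexp_manual(x, dim=0))  # finite, matches torch.logsumexp
print(torch.logsumexp(x, dim=0))
```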
