I have a numerical issue and I am trying to work out whether there is a bug in my calculation. I am using a model that samples weights, so I take a logsumexp over the sample dimension. Assuming log_lik is a tensor of shape (sample_n, instances), the relevant code is:
…
ll = torch.logsumexp(log_lik.sum(dim=1), dim=0)  # sum log-likelihoods over instances, then logsumexp over samples
ll = ll - np.log(sample_n)                       # turn the sum over samples into an average
ll = ll / instances                              # per-instance average log-likelihood
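In case it helps, here is a self-contained toy version of the same computation (the shapes and the random log_lik are made up, just so it runs on its own):

import numpy as np
import torch

# Toy shapes; in my real code log_lik comes from the weight-sampling model
sample_n, instances = 8, 100
log_lik = torch.randn(sample_n, instances) - 5.0  # placeholder (sample_n, instances) log-likelihoods

ll = torch.logsumexp(log_lik.sum(dim=1), dim=0)  # as above
ll = ll - np.log(sample_n)
ll = ll / instances
print(ll)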
My predictions have extremely low probability because they are out-of-distribution samples, so the log-probability values can be on the order of -100,000. When I try to do the logsumexp operation by hand, without the torch function, exp(-100000) just gives 0. In that case I would expect the output of logsumexp to be -inf, since it is taking the log of 0, but torch.logsumexp always comes up with a finite answer. How can this be?
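To illustrate what I mean, with made-up values of roughly the magnitude I am seeing:

import torch

x = torch.full((8,), -100000.0)        # e.g. log-likelihoods of very unlikely samples

naive = torch.log(torch.exp(x).sum())  # exp underflows to 0, so the log is -inf
print(naive)                           # tensor(-inf)

stable = torch.logsumexp(x, dim=0)     # finite: roughly -100000 + log(8)
print(stable)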