The 'inf' problem in KLDivLoss function

There are two distributions q and p.

q = tensor([[ 0.6380,  0.3620,  0.0000],
            [ 0.5583,  0.1702,  0.2715],
            [ 0.6978,  0.1521,  0.1501]])

p = tensor([[ 0.2799,  0.3934,  0.3267],
            [ 0.3521,  0.2994,  0.3484],
            [ 0.1763,  0.5100,  0.3137]])

I need to compute the KLDivLoss between q and p.

kld_loss = F.kl_div(q.log(), p, size_average=False) 

I noticed q[0][2] is equal to 0, so q.log() contains -inf, which makes the final loss 'inf'.
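For reference, here is a minimal runnable version of what I am doing (values copied from above; reduction='sum' should be the non-deprecated equivalent of size_average=False):

```python
import torch
import torch.nn.functional as F

q = torch.tensor([[0.6380, 0.3620, 0.0000],
                  [0.5583, 0.1702, 0.2715],
                  [0.6978, 0.1521, 0.1501]])
p = torch.tensor([[0.2799, 0.3934, 0.3267],
                  [0.3521, 0.2994, 0.3484],
                  [0.1763, 0.5100, 0.3137]])

# kl_div expects log-probabilities as the first argument and
# probabilities as the target (by default).
print(q.log()[0][2])                           # -inf, because q[0][2] == 0
print(F.kl_div(q.log(), p, reduction='sum'))   # inf
```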

To make the computation convenient, can I replace the 0 with 1e-8?

Is there a better solution?
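This is the epsilon workaround I had in mind, just a sketch: clamp q away from zero and re-normalize so each row is still a valid distribution (1e-8 is an arbitrary choice):

```python
eps = 1e-8
q_safe = q.clamp(min=eps)                           # no exact zeros left
q_safe = q_safe / q_safe.sum(dim=1, keepdim=True)   # rows still sum to 1

kld_loss = F.kl_div(q_safe.log(), p, reduction='sum')
print(kld_loss)  # finite now
```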

Sorry, I don't know how to solve it. Will you help me out?

Hi, where are your p and q coming from? If they are outputs of a network, it is highly unlikely to get 0 in a logit.
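For example, if q comes from raw network outputs, you can feed log-probabilities from log_softmax straight into kl_div, which stays finite for any finite logits. A sketch, assuming a hypothetical `logits` tensor standing in for your network output:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(3, 3)                    # hypothetical network output
log_q = F.log_softmax(logits, dim=1)          # finite for finite logits
p = torch.softmax(torch.randn(3, 3), dim=1)   # target probabilities

loss = F.kl_div(log_q, p, reduction='sum')
print(loss)  # finite
```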