Problems with custom loss function

Hello!

I’m writing a custom loss function, which looks like this:

import torch
import torch.nn.functional as F


class CustomLoss(object):
    def __init__(self, alpha, beta):
        self.alpha = alpha
        self.beta = beta

    def calc(self, first, second, k):
        # Similarity term from the pairwise distance between the flattened inputs
        u = torch.exp(-self.beta * F.pairwise_distance(
            torch.reshape(first, (first.size(0), -1)),
            torch.reshape(second, (first.size(0), -1))))

        # Two divergence terms; both take a log, which can produce -Inf/NaN
        D1 = torch.sum(u * torch.log((self.alpha * u) / ((self.alpha - 1) * u + k)))
        D2 = k * torch.log((self.alpha * k) / ((self.alpha - 1) * k + u))
        D2[torch.isnan(D2)] = 0  # zero out NaN entries before summing
        D2 = torch.sum(D2)

        return D1 + D2

However, after the first optimizer.step() the loss is nan. Can you tell me what the problem might be?

Thanks in advance!

Could you check if torch.log gets a zero or negative input, and might thus return -Inf or NaN, respectively?
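As a minimal sketch of that check (the tensor `x` is a hypothetical stand-in for the argument passed to torch.log in the loss):

```python
import torch

# Hypothetical stand-in for the ratio fed to torch.log in the loss
x = torch.tensor([0.5, 0.0, -0.2])

# torch.log returns -inf for a zero input and nan for a negative one
y = torch.log(x)
print(torch.isinf(y))  # tensor([False,  True, False])
print(torch.isnan(y))  # tensor([False, False,  True])

# A quick sanity check before taking the log:
if (x <= 0).any():
    print("torch.log will produce -Inf/NaN for",
          (x <= 0).sum().item(), "element(s)")
```

Dropping a check like this right before each torch.log call in `calc` should tell you which term goes bad first.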


Yes, possibly — that’s why I end up assigning 0 in the NaN case:

D2[torch.isnan(D2)] = 0
D2 = torch.sum(D2)