NaN in Pearson Correlation Coefficient loss function

Hi, I am trying to use a negative Pearson correlation loss function instead of MSE.
Here is the loss function:

class Neg_Pearson_Loss(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, X, Y):
        assert not torch.any(torch.isnan(X))
        assert not torch.any(torch.isnan(Y))
        # Normalise X and Y (subtract per-row mean)
        X = X - X.mean(1)[:, None]
        Y = Y - Y.mean(1)[:, None]
        # Standardise X and Y (divide by per-row stddev)
        X = (X / X.std(1)[:, None]) + 1e-5
        Y = (Y / Y.std(1)[:, None]) + 1e-5
        # Multiply X and Y and negate the mean correlation
        Z = -torch.mean(X * Y)
        return Z

But it gives a NaN loss.
I have normalized the labels and features between 0 and 1. The same code works with the MSE/MAE loss functions but gives NaN with the Pearson loss.
Any ideas?

You could be dividing by a zero standard deviation. You might want to move the 1e-5 eps value into the denominator instead of adding it after the division.
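Something like this sketch, which moves the eps inside the denominator so a constant row (zero stddev) no longer produces a division by zero (class and parameter names here are my own, not from the original post):

```python
import torch
import torch.nn as nn

class NegPearsonLoss(nn.Module):
    """Negative mean Pearson correlation, with eps guarding the denominator."""
    def __init__(self, eps: float = 1e-5):
        super().__init__()
        self.eps = eps

    def forward(self, X, Y):
        # Normalise: subtract each row's mean
        X = X - X.mean(dim=1, keepdim=True)
        Y = Y - Y.mean(dim=1, keepdim=True)
        # Standardise: eps is added to the stddev BEFORE dividing,
        # so a zero stddev yields 0/eps = 0 instead of 0/0 = NaN
        X = X / (X.std(dim=1, keepdim=True) + self.eps)
        Y = Y / (Y.std(dim=1, keepdim=True) + self.eps)
        # Negative mean correlation across the batch
        return -(X * Y).mean()

loss_fn = NegPearsonLoss()
X = torch.zeros(2, 4)        # constant rows: stddev = 0
Y = torch.rand(2, 4)
print(loss_fn(X, Y))         # finite, no NaN
```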
