Hi, I am trying to use a negative Pearson correlation loss instead of MSE.

Here is the loss function:

```
class Neg_Pearson_Loss(nn.Module):
    # https://stackoverflow.com/a/19710598/11170350
    def __init__(self):
        super(Neg_Pearson_Loss, self).__init__()

    def forward(self, X, Y):
        assert not torch.any(torch.isnan(X))
        assert not torch.any(torch.isnan(Y))
        # Normalise X and Y (zero mean per row)
        X = X - X.mean(1)[:, None]
        Y = Y - Y.mean(1)[:, None]
        # Standardise X and Y (unit std per row)
        X = (X / X.std(1)[:, None]) + 1e-5
        Y = (Y / Y.std(1)[:, None]) + 1e-5
        # Mean of the per-row products, negated Pearson correlation
        Z = (X * Y).mean(1)
        Z = 1 - Z.mean()
        return Z
```

But it gives a nan loss.

I have normalized the labels and features between 0 and 1. The same code works with the MSE/MAE loss functions but gives nan with the Pearson loss function.
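One thing I can reproduce in isolation (not sure it is the actual cause in my training run): since the values are scaled to [0, 1], a sample can end up with a constant row, whose standard deviation is zero, and then the centred row divided by that std is 0/0. A minimal sketch of that case:

```python
import torch

# Hypothetical repro: row 0 is constant, so its std is 0 and the
# division of the centred row (all zeros) by 0 produces nan.
X = torch.tensor([[0.5, 0.5, 0.5],
                  [0.1, 0.5, 0.9]])
Xc = X - X.mean(1)[:, None]       # centre each row
Z = Xc / X.std(1)[:, None]        # row 0: 0/0 -> nan
print(torch.isnan(Z).any())       # tensor(True)
```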

Any idea what is going wrong?