Lin's Concordance Correlation Coefficient as loss function

I am trying to use Lin’s Concordance Correlation Coefficient (CCC) as a loss function, but it does not seem to work correctly: the loss value barely changes during training. Can anyone give me some advice?
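For reference, Lin's CCC is defined as (with $\rho$ the Pearson correlation, $\sigma$ the standard deviations, and $\mu$ the means of the predictions and the ground truth):

$$
\rho_c = \frac{2\,\rho\,\sigma_{pred}\,\sigma_{gt}}{\sigma_{pred}^2 + \sigma_{gt}^2 + (\mu_{pred} - \mu_{gt})^2}
$$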
Here is my code:

import torch
import torch.nn as nn

class ConcordanceCorCoeff(nn.Module):

    def __init__(self):
        super(ConcordanceCorCoeff, self).__init__()
        self.mean = torch.mean
        self.var = torch.var
        self.sum = torch.sum
        self.sqrt = torch.sqrt
        self.std = torch.std

    def forward(self, prediction, ground_truth):
        # Means and (unbiased) variances along the batch dimension
        mean_gt = self.mean(ground_truth, 0)
        mean_pred = self.mean(prediction, 0)
        var_gt = self.var(ground_truth, 0)
        var_pred = self.var(prediction, 0)
        # Pearson correlation between prediction and ground truth
        v_pred = prediction - mean_pred
        v_gt = ground_truth - mean_gt
        cor = self.sum(v_pred * v_gt) / (self.sqrt(self.sum(v_pred ** 2)) * self.sqrt(self.sum(v_gt ** 2)))
        sd_gt = self.std(ground_truth)
        sd_pred = self.std(prediction)
        # Lin's CCC: 2*cor*sd_gt*sd_pred / (var_gt + var_pred + (mean_gt - mean_pred)^2)
        numerator = 2 * cor * sd_gt * sd_pred
        denominator = var_gt + var_pred + (mean_gt - mean_pred) ** 2
        ccc = numerator / denominator
        # Perfect agreement gives ccc = 1, so the loss 1 - ccc is 0 at the optimum
        return 1 - ccc
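For context, here is a minimal sketch of how I use the loss in a training step (the linear model, batch size, and learning rate are placeholders, not my actual setup):

criterion = ConcordanceCorCoeff()
model = nn.Linear(10, 1)                      # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(32, 10)                       # a batch of 32 examples
y = torch.randn(32)                           # continuous targets

optimizer.zero_grad()
loss = criterion(model(x).squeeze(-1), y)     # loss = 1 - CCC, lower is better
loss.backward()
optimizer.step()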

I’m not sure, but I have a feeling that storing the torch functions as attributes in __init__ might make the autograd history graph incorrect. Instead of using self.mean, try calling torch.mean directly (same for var, sqrt, etc.; basically, erase the entire __init__) and see if you still get the problem…
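Something like this is what I mean (the same computation, just with direct torch calls and no __init__):

class ConcordanceCorCoeff(nn.Module):

    def forward(self, prediction, ground_truth):
        mean_gt = torch.mean(ground_truth, 0)
        mean_pred = torch.mean(prediction, 0)
        var_gt = torch.var(ground_truth, 0)
        var_pred = torch.var(prediction, 0)
        v_pred = prediction - mean_pred
        v_gt = ground_truth - mean_gt
        cor = torch.sum(v_pred * v_gt) / (torch.sqrt(torch.sum(v_pred ** 2)) * torch.sqrt(torch.sum(v_gt ** 2)))
        sd_gt = torch.std(ground_truth)
        sd_pred = torch.std(prediction)
        numerator = 2 * cor * sd_gt * sd_pred
        denominator = var_gt + var_pred + (mean_gt - mean_pred) ** 2
        return 1 - numerator / denominator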

Thanks for your suggestion. But my code was actually correct; the problem turned out to be something wrong with my model.

I tried to use this loss function, but I found that the value of ccc can be negative… I’d like to know how this worked out for you. Thanks!
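For what it’s worth, a negative CCC is expected whenever the predictions are anti-correlated with the targets: like Pearson’s correlation, the CCC lies in [-1, 1], so the loss 1 - ccc lies in [0, 2]. A quick sanity check with the ConcordanceCorCoeff class from above:

# Predictions that are the exact negation of the targets give ccc = -1
criterion = ConcordanceCorCoeff()
y = torch.linspace(-1.0, 1.0, steps=100)
loss = criterion(-y, y)
print(loss.item())   # prints 2.0, i.e. ccc = -1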