One of the variables needed for gradient computation has been modified by an inplace operation (custom loss function)

I'm using the Concordance Correlation Coefficient (CCC) as a loss function, but the loss does not decrease. Any ideas?

import torch
import torch.nn as nn

class ConcordanceCorCoeff(nn.Module):
    def __init__(self):
        super(ConcordanceCorCoeff, self).__init__()
        self.mean = torch.mean
        self.var = torch.var
        self.sum = torch.sum
        self.sqrt = torch.sqrt
        self.std = torch.std

    def forward(self, prediction, ground_truth):
        # statistics along dim 0 (one value per output column)
        mean_gt = self.mean(ground_truth, 0)
        mean_pred = self.mean(prediction, 0)
        var_gt = self.var(ground_truth, 0)
        var_pred = self.var(prediction, 0)
        v_pred = prediction - mean_pred
        v_gt = ground_truth - mean_gt
        # note: cor and the standard deviations below are reduced over all
        # elements, while the means/variances above are computed per column
        cor = self.sum(v_pred * v_gt) / (self.sqrt(self.sum(v_pred ** 2)) * self.sqrt(self.sum(v_gt ** 2)))
        sd_gt = self.std(ground_truth)
        sd_pred = self.std(prediction)
        numerator = 2 * cor * sd_gt * sd_pred
        denominator = var_gt + var_pred + (mean_gt - mean_pred) ** 2
        ccc = numerator / denominator
        return 1 - ccc
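
As a quick sanity check, here's a minimal sketch using the class above: identical prediction and ground truth should give CCC = 1 and therefore a loss of approximately 0.

criterion = ConcordanceCorCoeff()
x = torch.randn(100, 1)
print(criterion(x, x))  # ~tensor([0.]): perfect agreement gives CCC = 1, so loss = 1 - CCC = 0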

The loss seems to be decreasing:

criterion = ConcordanceCorCoeff()
model = nn.Linear(10, 10)
data = torch.randn(10, 10)
target = torch.randn(10, 10)

optimizer = torch.optim.SGD(model.parameters(), lr=10.)

for epoch in range(10):
    optimizer.zero_grad()
    output = model(data)
    loss = criterion(output, target)
    loss.mean().backward()  # loss holds one value per output column, so reduce to a scalar first
    optimizer.step()
    print('Epoch {}, loss {}'.format(epoch, loss.mean().item()))

> Epoch 0, loss 0.9816429018974304
> Epoch 1, loss 0.4328025281429291
> Epoch 2, loss 0.43783488869667053
> Epoch 3, loss 0.49120745062828064
> Epoch 4, loss 0.5858393907546997
> Epoch 5, loss 0.4273451864719391
> Epoch 6, loss 0.11961951106786728
> Epoch 7, loss 0.10366281121969223
> Epoch 8, loss 0.6455065011978149
> Epoch 9, loss -0.01740434765815735

I assume the error from your title is already solved, as I cannot find any inplace operations in the posted code?
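
For reference, here's a minimal sketch of how that error is usually triggered, i.e. modifying a tensor inplace after autograd has saved it for the backward pass:

import torch

x = torch.randn(3, requires_grad=True)
y = torch.sigmoid(x)  # sigmoid saves its output for the backward pass
y += 1                # inplace modification of the saved tensor
y.sum().backward()    # raises: one of the variables needed for gradient
                      # computation has been modified by an inplace operation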

Also, I’m not familiar with this loss function, but it will return a NaN value if you pass a single sample (though that might be an invalid use case).
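
To illustrate: with a single sample, torch.var along dim 0 (unbiased by default) divides by n - 1 = 0, and the correlation term evaluates to 0 / 0, so the result is NaN. A minimal sketch, assuming the ConcordanceCorCoeff class from above:

criterion = ConcordanceCorCoeff()
single = torch.randn(1, 5)
print(criterion(single, single))  # tensor([nan, nan, nan, nan, nan])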