I am trying to write a custom loss function (**Noise Reduction Loss**) in PyTorch. It is very similar to cross-entropy loss, with the difference that it also puts some confidence in the answer the model itself predicts (the highest probability in the predicted matrix), on the assumption that some labels in the training data are incorrect. Here `pred` is the predicted [m x L] matrix, where m is the number of examples and L is the number of labels, `y_true` is the [m x 1] matrix of actual labels, and `ro` is a hyperparameter weighting the two criteria.
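In formula form (my notation, not from any reference: p_{i,j} is the predicted probability of label j for example i, and y_i is the given label of example i), the loss I intend is:

    L = -ro * mean_i( log p_{i, y_i} ) - (1 - ro) * mean_i( max_j log p_{i, j} )

so ro = 1 should reduce to plain cross-entropy, and ro = 0 only rewards the model for being confident in whatever it predicts.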

def lossNR(pred, y_true, ro):
    outputs = torch.log(pred)  # log of the softmax probabilities
    out1 = outputs.gather(1, y_true.view(-1, 1))  # pick the log-probability of each given label
    l1 = -ro * torch.mean(out1)  # cross-entropy term on the given labels
    l2 = -(1 - ro) * torch.mean(torch.max(outputs, 1)[0])  # confidence in the model's own top prediction
    print("l1 =", l1)
    print("l2 =", l2)
    return l1 + l2
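For reference, this is a minimal, self-contained way I exercise the function on a toy batch. The `eps` clamp is my own addition to guard against `log(0)`; everything else mirrors the function above, with the `print` calls dropped:

```python
import torch

def lossNR(pred, y_true, ro, eps=1e-12):
    # pred: [m, L] softmax probabilities; y_true: [m] integer class labels
    outputs = torch.log(pred.clamp(min=eps))  # clamp avoids log(0) = -inf
    out1 = outputs.gather(1, y_true.view(-1, 1))  # log-prob of each given label
    l1 = -ro * torch.mean(out1)  # cross-entropy term on the given labels
    l2 = -(1 - ro) * torch.mean(torch.max(outputs, 1)[0])  # confidence term
    return l1 + l2

torch.manual_seed(0)
logits = torch.randn(4, 3)
pred = torch.softmax(logits, dim=1)  # rows are valid probability distributions
y_true = torch.tensor([0, 2, 1, 1])
loss = lossNR(pred, y_true, ro=0.7)
```

As a sanity check, with `ro=1.0` the `l2` term vanishes and the result should match `torch.nn.functional.cross_entropy` applied to the logits.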

I have tried this loss function on several datasets, but it does not perform well. Please provide suggestions on what might be wrong.