Custom loss function (Noise Reduction Loss)

I am trying to write a custom loss function (Noise Reduction Loss) in PyTorch. It is very similar to cross-entropy loss, with the difference that it puts some confidence in the answer the model itself predicts (the label with the highest probability in the predicted matrix), under the assumption that some labels in the training data are incorrect. Here pred is the predicted [m * L] matrix, where m is the number of examples and L is the number of labels, y_true is the [m * 1] matrix of actual labels, and ro is a hyperparameter deciding the impact of each of the two criteria.
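
In equation form, the loss I want to compute is (with pred_{i,j} the predicted probability of label j for example i):

$$
\text{lossNR}(\text{pred}, y, ro) = -\,ro \cdot \frac{1}{m}\sum_{i=1}^{m} \log \text{pred}_{i,\,y_i} \;-\; (1 - ro)\cdot \frac{1}{m}\sum_{i=1}^{m} \max_{j} \log \text{pred}_{i,\,j}
$$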

import torch

def lossNR(pred, y_true, ro):
    outputs = torch.log(pred)  # log of the predicted probabilities (log-softmax values)
    out1 = outputs.gather(1, y_true.view(-1, 1))  # pick the log-probabilities of the actual labels
    l1 = -ro * torch.mean(out1)  # cross-entropy-like term on the given labels
    l2 = -(1 - ro) * torch.mean(torch.max(outputs, 1)[0])  # confidence term on the predicted labels
    print("l1 =", l1)
    print("l2 =", l2)
    return l1 + l2
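
For context, this is roughly how I call it (a minimal sketch; the sizes, the random logits, and the value of ro are just placeholders):

logits = torch.randn(4, 10, requires_grad=True)  # m = 4 examples, L = 10 labels (arbitrary)
pred = torch.softmax(logits, dim=1)              # predicted [m * L] probability matrix
y_true = torch.randint(0, 10, (4,))              # actual labels
loss = lossNR(pred, y_true, ro=0.7)              # ro chosen arbitrarily for the example
loss.backward()                                  # behaves like any other scalar loss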

I have tried the loss function on various datasets but it does not perform well. Please provide suggestions.

Hi,

Is it expected that both l1 and l2 will have the same sign?
Also, why were you expecting this to perform well?

l1 and l2 both take the negated mean of logarithms of values less than 1 (which are negative), scaled by non-negative constants. Hence they will have the same sign.
It should work, as it is almost the same as categorical cross-entropy, and the concept is taken from a recent CVPR research paper. I think there is some flaw in my implementation.
If I put ro=1, then it will become Categorical CrossEntropy. Am I correct?

Cross entropy already contains the log softmax (see the CrossEntropyLoss doc), so it won’t be the same.
But from what I can see, what you compute will be close to NLLLoss with mean reduction, yes.
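
A quick way to check that (a small sketch, assuming pred is the softmax of some logits and lossNR from above is in scope):

import torch
import torch.nn.functional as F

logits = torch.randn(8, 5)
y = torch.randint(0, 5, (8,))
pred = torch.softmax(logits, dim=1)

# With ro=1 the second term vanishes, so lossNR reduces to
# -mean(log pred[i, y_i]), i.e. NLLLoss (mean reduction) on log(pred),
# which matches F.cross_entropy applied to the raw logits.
print(lossNR(pred, y, ro=1))
print(F.nll_loss(torch.log(pred), y))
print(F.cross_entropy(logits, y))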