Problems with Cross Entropy Loss

I am getting two different loss values in the following two situations, although I expect them to be the same.

  1. ce = torch.nn.CrossEntropyLoss(classWeights, reduction='none')
     loss = ce(input, targets).mean()

  2. ce = torch.nn.CrossEntropyLoss(classWeights, reduction='mean')
     loss = ce(input, targets)

What could be the reason for that?

The two results differ because, when a weight tensor is passed, reduction='mean' divides the summed loss by the sum of the per-sample weights w_{target_n}, not by the batch size, while calling .mean() on the unreduced loss divides by the batch size. To reproduce the 'mean' result you would have to normalize the unreduced, weighted loss by the sum of the classWeights gathered at each target index.
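A minimal sketch of the difference, assuming a 3-class problem with random inputs (the concrete weight values and shapes here are made up for illustration; classWeights is written as class_weights):

    import torch

    torch.manual_seed(0)

    class_weights = torch.tensor([1.0, 2.0, 3.0])  # hypothetical class weights
    logits = torch.randn(8, 3)
    targets = torch.randint(0, 3, (8,))

    # per-sample weighted losses, no reduction
    ce_none = torch.nn.CrossEntropyLoss(class_weights, reduction='none')
    unreduced = ce_none(logits, targets)

    # built-in 'mean' reduction
    ce_mean = torch.nn.CrossEntropyLoss(class_weights, reduction='mean')
    weighted_mean = ce_mean(logits, targets)

    # .mean() divides by the batch size ...
    naive_mean = unreduced.mean()

    # ... while reduction='mean' divides by the sum of the per-sample weights
    manual = unreduced.sum() / class_weights[targets].sum()

    print(torch.allclose(manual, weighted_mean))      # True
    print(torch.allclose(naive_mean, weighted_mean))  # False in general

The two values only coincide when all class weights are equal (or when every class happens to be equally represented in the batch), since the two denominators are then proportional.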
