CrossEntropyLoss() function in PyTorch

Hello,
I tried searching for this question on the internet, but I didn't find a definitive answer. I'm confused.

How is the cross entropy loss calculated by torch.nn.CrossEntropyLoss()?

Is it the sum of the log probabilities of the correct class? Or is it the sum of the log probabilities of the correct class plus the sum of log(1 - probability) over the wrong classes?

Because in the first case it would be a negative log likelihood, if I understand correctly.

Thanks

https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html

Note that this case is equivalent to the combination of LogSoftmax and NLLLoss.
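In other words, it is your first case: the (weighted) mean of -log p(correct class). The wrong classes only enter through the softmax normalization, not through any log(1 - p) terms. For reference, the documented per-sample loss with class-index targets is l_n = -w_{y_n} * log( exp(x_{n, y_n}) / sum_c exp(x_{n, c}) ). A minimal sketch with made-up logits to check the equivalence:

```python
import torch
import torch.nn.functional as F

# Toy batch (values made up for illustration): 3 samples, 4 classes.
logits = torch.randn(3, 4)
target = torch.tensor([0, 2, 1])

# CrossEntropyLoss == LogSoftmax followed by NLLLoss.
ce = F.cross_entropy(logits, target)
nll = F.nll_loss(F.log_softmax(logits, dim=1), target)
print(torch.allclose(ce, nll))  # True

# Equivalently, by hand: the mean of -log p(correct class).
# No log(1 - p) terms for the wrong classes appear anywhere.
log_probs = F.log_softmax(logits, dim=1)
by_hand = -log_probs[torch.arange(3), target].mean()
print(torch.allclose(ce, by_hand))  # True
```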

Thanks for your reply!

I just wonder: what is the weight w_{y_n} that is multiplied by the log probability?

torch.nn.CrossEntropyLoss(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean', label_smoothing=0.0)

Note that CrossEntropyLoss has an optional weight argument:

  • weight (Tensor, optional) – a manual rescaling weight given to each class. If given, has to be a Tensor of size C
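So w_{y_n} is just that per-class weight, looked up by each sample's target class y_n; with weight=None, every w_{y_n} is effectively 1. Also note that with the default reduction='mean', the docs state the sum of the weighted losses is divided by the sum of the applied weights, not by the batch size. A small sketch (class count and weights made up) to check both points:

```python
import torch

# Made-up example: 3 classes, with class 1 weighted twice as heavily.
weight = torch.tensor([1.0, 2.0, 1.0])
logits = torch.randn(4, 3)
target = torch.tensor([1, 0, 2, 1])

# With reduction='none', each sample's loss is scaled by weight[target].
weighted = torch.nn.CrossEntropyLoss(weight=weight, reduction='none')(logits, target)
plain = torch.nn.CrossEntropyLoss(reduction='none')(logits, target)
print(torch.allclose(weighted, plain * weight[target]))  # True

# With the default reduction='mean', the denominator is the sum of the
# applied weights w_{y_n}, not the batch size.
mean_loss = torch.nn.CrossEntropyLoss(weight=weight)(logits, target)
print(torch.allclose(mean_loss, weighted.sum() / weight[target].sum()))  # True
```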