I tried to search for this question on the internet, but I couldn't find a definitive answer, and I'm confused.
How is the cross-entropy loss calculated by torch.nn.CrossEntropyLoss()?
Is it the sum of the log probabilities of the correct class only? Or is it the sum of the log probabilities of the correct class plus the log of (1 - probability) for each of the wrong classes?
Because in the first case, if I understand correctly, it would just be the negative log-likelihood.
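To make the first interpretation concrete, here is a sketch of what I think happens, written in plain Python (the function `manual_cross_entropy` is my own name, not part of torch): take the log-softmax of the logits, then return the negative log probability of the target class alone, ignoring the wrong classes.

```python
import math

def manual_cross_entropy(logits, target):
    """Negative log probability of the target class under softmax(logits)."""
    # log-softmax with the usual max-subtraction for numerical stability
    m = max(logits)
    log_sum_exp = m + math.log(sum(math.exp(x - m) for x in logits))
    log_probs = [x - log_sum_exp for x in logits]
    # only the correct class contributes; wrong classes enter solely
    # through the softmax normalization, not via log(1 - p) terms
    return -log_probs[target]

# example: with equal logits over 2 classes, p(correct) = 0.5,
# so the loss is -log(0.5) = log(2)
loss = manual_cross_entropy([0.0, 0.0], target=0)
print(loss)
```

Is this (averaged over the batch) what torch.nn.CrossEntropyLoss() computes, or does it also include terms for the incorrect classes?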