[Solved] Confusing classification loss results

Hello, I am trying to make sense of some classification loss results. The examples in the documentation only use random-valued inputs, so it's hard to gain perspective on the results I'm seeing. I would expect these results (apart from probably the first nll_loss) to be 0, since I am using the same tensor as both input and target, except the input is one-hot encoded, yet I'm seeing 0.5514 for what should be a loss of 0.

I am probably missing something obvious, but in the past I've only dealt with regression tasks, and these classification results just seem wrong. Everything I've found while searching suggests this should work the way I expect, yet it does not.

import torch
import torch.nn as nn
import torch.nn.functional as F

a = torch.tensor([0, 1, 2])
b = F.one_hot(a).float()

F.cross_entropy(b, a)                    # returns 0.5514
F.nll_loss(b, a)                         # returns -1
F.nll_loss(F.log_softmax(b, dim=1), a)   # returns 0.5514
nn.BCELoss()(b, b)                       # returns 0

F.cross_entropy expects logits as its input and applies F.log_softmax and F.nll_loss internally on b, which is why the first and third loss calls give equal results.
F.nll_loss expects log probabilities, so it returns an invalid loss value when you pass one-hot encoded inputs to it.
nn.BCELoss expects probabilities for both inputs, and since both arguments are the same tensor it returns a zero loss.
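
A minimal sketch of that equivalence (the variable names here are mine): cross_entropy on raw logits matches log_softmax followed by nll_loss, and nll_loss only gives a meaningful value when it is actually fed log probabilities.

import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(3, 3)           # arbitrary, unnormalized scores
target = torch.tensor([0, 1, 2])

# cross_entropy == log_softmax + nll_loss on the same logits
ce  = F.cross_entropy(logits, target)
nll = F.nll_loss(F.log_softmax(logits, dim=1), target)
print(torch.allclose(ce, nll))       # True

# nll_loss applied to genuine log probabilities of a "perfect" prediction
log_probs = torch.log(F.one_hot(target).float().clamp(min=1e-6))
print(F.nll_loss(log_probs, target)) # 0, as expected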

Yes, thank you. After some time I realized that a one-hot vector is a probability distribution, and to convert it to logits I need to apply a soft version of log(x / (1 - x)), which gave close-to-zero results. I've changed the title to solved.
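
A short sketch of that conversion, assuming "soft" means clamping the probabilities away from 0 and 1 with a small eps (my choice) before taking the logit, so that log never sees 0:

import torch
import torch.nn.functional as F

a = torch.tensor([0, 1, 2])
b = F.one_hot(a).float()

# clamp probabilities away from 0/1, then apply log(x / (1 - x))
eps = 1e-6
p = b.clamp(eps, 1 - eps)
logits = torch.log(p / (1 - p))      # equivalently torch.logit(p)

print(F.cross_entropy(logits, a))    # close to 0, since softmax(logits) ≈ b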