A "perfect" prediction doesn't give a CrossEntropyLoss of 0

Consider this example:

        import torch
        import torch.nn as nn

        criterion = nn.CrossEntropyLoss()

        # targets: class indices
        y = torch.LongTensor([1, 1, 0, 0])
        # "perfect" one-hot scores matching those classes
        x = torch.FloatTensor([[0., 1., 0.], [0., 1., 0.], [1., 0., 0.], [1., 0., 0.]])

        loss_reference = criterion(x, y)
        print(loss_reference)

It gives a loss of about 0.5514, while I would expect it to be 0.

Why is that?

Ok… I've realized that it is because `nn.CrossEntropyLoss` applies softmax internally (more precisely, it combines `log_softmax` and `NLLLoss`), so it expects raw logits, not probabilities: https://stackoverflow.com/questions/49390842/cross-entropy-in-pytorch

Here `softmax([0., 1., 0.])` ≈ `[0.2119, 0.5761, 0.2119]`, so the loss per sample is `-log(0.5761)` ≈ 0.5514 rather than `-log(1) = 0`.
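As a sanity check, here is a minimal sketch (reusing the `criterion`, `x`, and `y` defined above) that reproduces the loss manually and shows that scaling the logits up drives the loss toward 0:

        import torch.nn.functional as F

        # CrossEntropyLoss is log_softmax followed by NLLLoss under the hood
        manual = F.nll_loss(F.log_softmax(x, dim=1), y)
        print(manual)                  # tensor(0.5514), same as criterion(x, y)

        # Larger logit margins make the softmax sharper, so the loss shrinks
        print(criterion(10 * x, y))    # ~9e-05, much closer to 0
        print(criterion(100 * x, y))   # tensor(0.) up to float32 precision

In other words, to get a loss of (nearly) 0, the logit of the correct class has to be much larger than the others, not just equal to 1.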