How to obtain a result of 0 from CrossEntropyLoss?

I want to test whether my code calculates the loss value correctly, so I first need a known, fixed loss value from CrossEntropyLoss.
But …
Why is the output of the following code not 0?

import torch
import torch.nn as nn

inputs = torch.tensor([[1., 0., 0.],
                       [0., 1., 0.],
                       [0., 0., 1.]])
labels = torch.tensor([0, 1, 2])

criterion = nn.CrossEntropyLoss()

loss = criterion(inputs, labels)

print(loss)

nn.CrossEntropyLoss expects raw logits (unbounded, unnormalized scores), not probabilities, so pass large/small values instead of ones and zeros to push the softmax toward a one-hot distribution:

inputs = torch.tensor([[100., 0., 0.],
                       [0., 100., 0.],
                       [0., 0., 100.]])
labels = torch.tensor([0, 1, 2])

criterion = nn.CrossEntropyLoss()

loss = criterion(inputs, labels)

print(loss)
# tensor(0.)
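
For a quick sanity check, here is a minimal sketch of what the loss actually computes: CrossEntropyLoss applies log_softmax internally and then takes the negative log-likelihood at the target index, which is why the one-hot inputs from the first snippet give a nonzero value.

import torch
import torch.nn as nn
import torch.nn.functional as F

inputs = torch.tensor([[1., 0., 0.],
                       [0., 1., 0.],
                       [0., 0., 1.]])
labels = torch.tensor([0, 1, 2])

# CrossEntropyLoss is equivalent to log_softmax followed by NLLLoss
manual = F.nll_loss(F.log_softmax(inputs, dim=1), labels)
builtin = nn.CrossEntropyLoss()(inputs, labels)

print(manual, builtin)
# both should print roughly 0.5514, i.e. -log(e / (e + 2)) for each row

With logits of 100 the target-class softmax probability is 1 up to float32 precision, so the loss comes out as 0.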

Hi @ptrblck_de,

How do we push a loss curve forward or backward so that it matches a source loss curve and trains at the same pace in PyTorch?

Thanks in advance

I don’t fully understand your use case. Could you describe what exactly you are working on?
