nn.NLLLoss() gives a negative result - what does it mean?

I saw code which uses nn.NLLLoss() (negative log likelihood loss).

I looked at the results, and some of the loss values returned by nn.NLLLoss() are negative.

What is the meaning of negative values for this loss?

I don’t know what a negative loss would represent and guess your inputs are in the wrong range, as seen in this example:

import torch
import torch.nn as nn
import torch.nn.functional as F

criterion = nn.NLLLoss()

# correct inputs: log probabilities
output = F.log_softmax(torch.randn(10, 10), dim=1)
target = torch.randint(0, 10, (10,))
loss = criterion(output, target)
print(loss)
# e.g. tensor(2.6895)

# wrong inputs: raw probabilities
output = F.softmax(torch.randn(10, 10), dim=1)
loss = criterion(output, target)
print(loss)
# e.g. tensor(-0.1101)

nn.NLLLoss expects log probabilities as the model output, so make sure F.log_softmax was applied to the model output.
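To see why the wrong inputs turn the loss negative: nn.NLLLoss just picks out the target entry in each row and negates the mean. Log probabilities are always <= 0, so the negated mean is >= 0; raw probabilities lie in [0, 1], so the negated mean comes out <= 0. A minimal sketch reproducing the built-in loss by hand (the seed and shapes here are arbitrary, just for illustration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(10, 10)
target = torch.randint(0, 10, (10,))

log_probs = F.log_softmax(logits, dim=1)

# nn.NLLLoss (default reduction='mean', no class weights) is equivalent to
# gathering the target entry per row and negating the mean:
manual = -log_probs[torch.arange(10), target].mean()
builtin = nn.NLLLoss()(log_probs, target)
print(torch.allclose(manual, builtin))  # True

# log_softmax outputs are <= 0, so the loss is >= 0.
# softmax outputs are in [0, 1], so the negated mean is <= 0 -
# that is where the negative loss comes from.
probs = F.softmax(logits, dim=1)
print(nn.NLLLoss()(probs, target).item() <= 0)  # True
```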
