nn.CTCLoss negative loss

Hello everyone, I wonder if someone could help me with this. I created a mini test with torch.nn.CTCLoss, and I don't know why it returns a negative loss.

import torch
from torch import nn

print(torch.__version__)    # 1.3.1

# Batch of input vectors, shape (T, N, C) = (2, 1, 5) after the transpose
input = torch.FloatTensor([[[0.1, 0.6, 0.1, 0.1, 0.1], [0.1, 0.1, 0.6, 0.1, 0.1]]]).transpose(0, 1)
input_lengths = torch.IntTensor([2])

labels = torch.IntTensor([[1, 2]])
label_sizes = torch.IntTensor([2])

ctc_loss = nn.CTCLoss()
loss = ctc_loss(input, labels, input_lengths, label_sizes)
print(loss.item())  # -0.6000000238418579

nn.CTCLoss expects log probabilities as the input, as described in the docs. Since you pass raw probabilities, they get interpreted as log probabilities: the only valid alignment for the target (1, 2) then gets a "log probability" of 0.6 + 0.6 = 1.2, and with the default reduction='mean' this is divided by the target length, giving -1.2 / 2 = -0.6. If you call `input = input.log_softmax(2)`, you'll get a positive loss value.
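
For example, a minimal sketch of your setup with the normalization added (the exact loss value just depends on the inputs):

import torch
from torch import nn

# Same setup as above, but normalize over the class dimension first
input = torch.FloatTensor([[[0.1, 0.6, 0.1, 0.1, 0.1], [0.1, 0.1, 0.6, 0.1, 0.1]]]).transpose(0, 1)
log_probs = input.log_softmax(2)  # log probabilities over C

input_lengths = torch.IntTensor([2])
labels = torch.IntTensor([[1, 2]])
label_sizes = torch.IntTensor([2])

ctc_loss = nn.CTCLoss()
loss = ctc_loss(log_probs, labels, input_lengths, label_sizes)
print(loss.item())  # now a positive loss value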


I got it. Thanks for your help!