Although it should be theoretically impossible, I’m getting a negative loss when using CTC loss.

Running in batch gives the same results, so I’ll share the shapes of my tensors for B=1:

**Log_prob** size is `[1500, 73]`.

**Targets** size is `[18]`, with values:

`tensor([16, 9, 10, 11, 2, 10, 45, 17, 72, 10, 17, 72, 13, 5, 72, 12, 35, 11])`.

**Input_lengths** is `tensor(1500)`.

**Target_lengths** is `tensor(18)`.

This produces a loss of -1.3.

My CTC loss is defined as:

```
self.loss = nn.CTCLoss(blank=72, zero_infinity=True)
```
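For reference, here is a minimal, self-contained sketch of the setup described above (random log-probabilities in place of my model's output, since the real ones are too large to paste):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

T, C = 1500, 73  # input length, number of classes (blank = 72)

# (T, N, C) log-probabilities for a batch of N=1; random stand-in for the model output
log_probs = torch.randn(T, 1, C).log_softmax(dim=-1)

# The exact target sequence from the post; note it contains the value 72,
# which is also the index configured as the blank token below.
targets = torch.tensor([16, 9, 10, 11, 2, 10, 45, 17, 72, 10, 17, 72, 13, 5, 72, 12, 35, 11])

input_lengths = torch.tensor([T])
target_lengths = torch.tensor([targets.numel()])

loss_fn = nn.CTCLoss(blank=72, zero_infinity=True)
loss = loss_fn(log_probs, targets, input_lengths, target_lengths)
print(loss.item())
```

With random log-probabilities this prints a large positive value; the negative value only shows up with my trained model's outputs.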

As mentioned, the loss (and the entire training) worked perfectly; only after 50k steps did it suddenly become negative.

I’ve seen online that a target length larger than the input length can produce a negative loss, but as you can see that is not the case here.

I’ll be happy to share any more information.

Thanks for your help!!!