Is CTC loss badly defined?

According to the CTC loss documentation, the target sequence length must be <= the input sequence length. This allows the case where target sequence length == input sequence length.
But what if your target is constant (every label the same)? Then CTC needs to predict interleaved blanks, since repeated labels must be separated by a blank. In the worst case, where the target is constant with length N, the model needs to emit N - 1 blanks between the labels, so the alignment needs at least 2N - 1 frames. So really, the target sequence length must be <= roughly half the input sequence length, right?
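To make the worst case concrete, here is a plain-Python sketch (the blank symbol and function name are mine, not any library's API) of the shortest frame sequence that collapses to a given target under CTC's collapsing rule:

```python
BLANK = "-"  # stand-in symbol for the CTC blank

def shortest_alignment(target):
    """Shortest frame sequence that collapses to `target` under CTC:
    adjacent repeated labels must be separated by a blank, so each
    adjacent equal pair costs one extra frame."""
    frames = [target[0]]
    for prev, cur in zip(target, target[1:]):
        if cur == prev:
            frames.append(BLANK)  # blank is mandatory between repeats
        frames.append(cur)
    return frames

print(shortest_alignment("aaa"))  # ['a', '-', 'a', '-', 'a']: 2*3 - 1 = 5 frames
```

For a constant target of length N this gives exactly 2N - 1 frames, which is where the "about half" bound comes from.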

Also, is there a better alternative to CTC for unaligned, unsegmented sequence prediction?

In fact, if the number of adjacent repeated labels within a target (which equals the number of "blanks" the model must emit) plus the length of the target exceeds the model's output length, then you get either INF or NaN from CTC. So a sufficient condition for CTC to always work is input_length >= 2 * target_length.
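That condition can be checked before calling the loss. A minimal sketch (the function name is mine, not a library API):

```python
def ctc_min_input_length(target):
    """Exact minimum number of input frames CTC needs for `target`:
    one frame per label plus one mandatory blank per adjacent repeat."""
    repeats = sum(1 for a, b in zip(target, target[1:]) if a == b)
    return len(target) + repeats

# The worst case (constant target) needs 2*N - 1 frames, so
# input_length >= 2 * target_length is always sufficient:
for target in ["aaaa", "abba", "abcd"]:
    assert ctc_min_input_length(target) <= 2 * len(target)

print(ctc_min_input_length("aaaa"))  # 4 labels + 3 repeats = 7
```

Inputs shorter than this exact minimum are the ones that produce INF (and then NaN gradients) in the loss.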

@Peter_Featherstone, have you figured it out?
Has an issue been created?