RuntimeError: blank must be in label range for CTC loss

optimizer.zero_grad()

# get the output from the model
output, h = mynet(specs, h)
print(output.size())
output = F.log_softmax(output, dim=2)
output = output.transpose(0, 1)  # (T, N, C), as CTCLoss expects

# calculate the loss and perform backprop
loss = criterion(output, labels, input_lengths, label_lengths)
loss.backward()

# `clip_grad_norm_` helps prevent the exploding gradient problem in RNNs / LSTMs
nn.utils.clip_grad_norm_(mynet.parameters(), clip)
optimizer.step()
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-132-fd3f6611addd> in <module>
     42         output = output.transpose(0,1)
     43         # calculate the loss and perform backprop
---> 44         loss = criterion(output, labels, input_lengths, label_lengths)
     45         loss.backward()
     46         # `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    548             result = self._slow_forward(*input, **kwargs)
    549         else:
--> 550             result = self.forward(*input, **kwargs)
    551         for hook in self._forward_hooks.values():
    552             hook_result = hook(self, input, result)

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/loss.py in forward(self, log_probs, targets, input_lengths, target_lengths)
   1309     def forward(self, log_probs, targets, input_lengths, target_lengths):
   1310         return F.ctc_loss(log_probs, targets, input_lengths, target_lengths, self.blank, self.reduction,
-> 1311                           self.zero_infinity)
   1312 
   1313 # TODO: L1HingeEmbeddingCriterion

/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py in ctc_loss(log_probs, targets, input_lengths, target_lengths, blank, reduction, zero_infinity)
   2050     """
   2051     return torch.ctc_loss(log_probs, targets, input_lengths, target_lengths, blank, _Reduction.get_enum(reduction),
-> 2052                           zero_infinity)
   2053 
   2054 

RuntimeError: blank must be in label range
criterion = nn.CTCLoss(blank=28, zero_infinity=False)
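For context, `nn.CTCLoss` requires the blank index to be a valid class index, i.e. `blank < C`, where `C` is the size of the class dimension of `log_probs`. If the network classifies 26 letters plus space (27 characters), the output must have 28 classes including the blank, so valid indices are 0..27 and `blank=28` is out of range. A minimal sketch of a working setup (shapes and the `num_chars` value here are illustrative assumptions, not the original poster's exact model):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

num_chars = 27                    # 'a'-'z' plus space
num_classes = num_chars + 1       # +1 for the CTC blank -> 28 classes
# blank=0 is PyTorch's default; characters then map to indices 1..27
criterion = nn.CTCLoss(blank=0, zero_infinity=False)

T, N, S = 50, 4, 10               # time steps, batch size, target length
log_probs = F.log_softmax(torch.randn(T, N, num_classes), dim=2)
targets = torch.randint(1, num_classes, (N, S))       # labels in 1..27, never the blank
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), S, dtype=torch.long)

loss = criterion(log_probs, targets, input_lengths, target_lengths)
```

Equivalently, you could keep characters at indices 0..26 and set `blank=27`; the only requirement is that the blank index fits inside the output dimension and never appears in the targets.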

Can somebody help me with this, please? Thank you. I am using a GRU followed by a fully connected layer; it classifies the alphabet plus space.

Sorry that this reply doesn't answer your error, but I wanted to ask which method you are using for CTC decoding?
TensorFlow has options like the CTC beam search decoder and the CTC greedy search decoder. Have you tried using a TensorFlow decoder together with the base PyTorch implementation?

No, I am so new to this. I wouldn't even know how to combine both.
For a reference on the decoder, please visit this blog:

It explains how to create a decoder in PyTorch.
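In the meantime, here is a minimal sketch of a greedy (best-path) CTC decoder in plain PyTorch: take the argmax class at each time step, collapse consecutive repeats, then drop blanks. The `idx_to_char` mapping and `blank=0` convention are assumptions for the example, not part of the original model:

```python
import torch

def greedy_ctc_decode(log_probs, idx_to_char, blank=0):
    """Greedy (best-path) CTC decoding for a single utterance.

    log_probs: (T, C) tensor of per-timestep log-probabilities.
    idx_to_char: dict mapping class index -> character.
    """
    best_path = torch.argmax(log_probs, dim=1).tolist()
    decoded = []
    prev = blank
    for idx in best_path:
        # emit a character only when it differs from the previous step
        # (collapse repeats) and is not the blank symbol
        if idx != prev and idx != blank:
            decoded.append(idx_to_char[idx])
        prev = idx
    return "".join(decoded)

# toy usage: classes 0=blank, 1='h', 2='i'; path [1, 1, 0, 2] -> "hi"
idx_to_char = {1: "h", 2: "i"}
scores = torch.zeros(4, 3)
for t, c in enumerate([1, 1, 0, 2]):
    scores[t, c] = 1.0
print(greedy_ctc_decode(scores, idx_to_char))  # hi
```

Greedy decoding is the simplest option; a beam search decoder usually gives better transcriptions but is more involved to implement.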

I hope I helped you.

It was helpful :smiley:

1 Like

Hi @shwe87 - it may be a little bit too late now.
However, you could check your 'char_index_policy' dict. The error you mentioned above is very likely the result of the blank label not being accounted for in this dict.
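For illustration, such a character-to-index dict (the name `char_index_policy` is taken from the reply above; the layout is an assumption) could reserve index 0 for the blank so that it matches `nn.CTCLoss(blank=0)` and the blank never appears in the targets:

```python
# Reserve index 0 for the CTC blank; real characters get indices 1..27.
# Targets passed to CTCLoss must then only contain values 1..27.
chars = " abcdefghijklmnopqrstuvwxyz"   # space + 26 letters
char_index_policy = {c: i + 1 for i, c in enumerate(chars)}

print(len(char_index_policy))            # 27
print(char_index_policy[" "])            # 1
```

If the dict instead assigns characters to indices 0..27 with no slot left for the blank, the blank index has to be 28, which only works if the network output also has 29 classes.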

I hope I was able to help you.
Greetings, Unity05