I am learning CTC loss from the CTCLoss — PyTorch 1.10.0 documentation.
If I have 100 samples, I currently get a single CTC loss averaged over all samples in the batch:
ctc_loss = torch.nn.CTCLoss()
loss = ctc_loss(input, target, input_lengths, target_lengths)
print(loss)  # tensor(7.8303, grad_fn=<MeanBackward0>)
But how can I get the CTC loss for every sample individually, instead of one loss for the whole batch?
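For anyone answering: I believe this can be done by passing `reduction='none'` to `CTCLoss`, which makes it return one loss per batch element instead of the mean. A minimal sketch with made-up shapes (the tensor sizes here are illustrative, not from my real data):

```python
import torch

T, N, C = 50, 16, 20   # input length, batch size, number of classes (0 = blank)
S, S_min = 30, 10      # max / min target length

# Log-probabilities over classes at each time step, shape (T, N, C)
input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_()
target = torch.randint(1, C, (N, S), dtype=torch.long)

input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.randint(S_min, S, (N,), dtype=torch.long)

# reduction='none' -> a tensor of shape (N,): one CTC loss per sample
ctc_loss = torch.nn.CTCLoss(reduction='none')
loss = ctc_loss(input, target, input_lengths, target_lengths)
print(loss.shape)  # torch.Size([16])
```

Note that with `reduction='none'` the per-sample losses are not divided by their target lengths, so to recover the default `'mean'` value you would compute `(loss / target_lengths).mean()` rather than `loss.mean()`.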