PyTorch CTCLoss Error

When I try to use torch.nn.CTCLoss with reduction='none', I get the error shown in the attached screenshot.

There is no way for me to pass size_average and reduce as parameters to the PyTorch CTCLoss. I am assuming these values are currently ambiguous, hence the error.
Any suggestions?

Is there any way to pass these parameters through CTCLoss so that they are not ambiguous?

My CTCLoss looks something like this:

```
torch.nn.CTCLoss(reduction='sum', zero_infinity=True, **kwargs)
```
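
For reference, as far as I can tell the documented constructor only exposes blank, reduction and zero_infinity, so I don't see where size_average or reduce could be passed. A quick sketch of what I mean:

```
import torch

# Documented constructor arguments for torch.nn.CTCLoss: blank, reduction,
# zero_infinity -- there is no size_average or reduce parameter.
ctc_loss = torch.nn.CTCLoss(blank=0, reduction='none', zero_infinity=True)

# Passing one of the legacy arguments directly would just fail with
# "TypeError: __init__() got an unexpected keyword argument 'size_average'":
# torch.nn.CTCLoss(size_average=True)
```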

Could you post a code snippet to reproduce this issue?
Based on the screenshot you’ve posted, it looks like reduce and size_average are used, which are the legacy arguments, so which PyTorch version are you using?

PS: You can add code snippets directly by wrapping them into three backticks ``` :wink:
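
For reference, the legacy size_average/reduce pair is just the old spelling of reduction; internally it gets translated by a private helper. A small sketch of that mapping (it uses a private function, so don't rely on it in real code):

```
import torch.nn._reduction as _reduction

# Private helper PyTorch uses to translate the legacy arguments into `reduction`
# (it also emits a deprecation warning when called):
print(_reduction.legacy_get_string(True, True))    # 'mean'
print(_reduction.legacy_get_string(False, True))   # 'sum'
print(_reduction.legacy_get_string(True, False))   # 'none'
```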

Hey, the PyTorch version is 1.3.1.

Code - asr ctc

Unfortunately, the code is not executable. Could you create dummy inputs, so that we could debug this issue?
Using your default values (reduction='none', zero_infinity=True), this dummy code snippet works:

```
import torch

ctc = torch.nn.CTCLoss(reduction='none', zero_infinity=True)

T = 50      # Input sequence length
C = 20      # Number of classes (including blank)
N = 16      # Batch size
S = 30      # Target sequence length of longest target in batch
S_min = 10  # Minimum target length, for demonstration purposes

# Initialize random batch of input vectors with size = (T, N, C)
input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_()
# Initialize random batch of targets (0 = blank, 1:C = classes)
target = torch.randint(low=1, high=C, size=(N, S), dtype=torch.long)
input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long)
target_lengths = torch.randint(low=S_min, high=S, size=(N,), dtype=torch.long)

loss = ctc(input, target, input_lengths, target_lengths)
```
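
With reduction='none' the returned loss has one entry per sample, so any further reduction is up to you. As a rough follow-up (reusing loss and target_lengths from the snippet above), normalizing by the target lengths and averaging should reproduce what reduction='mean' does:

```
# loss has shape (N,) == torch.Size([16]): one CTC loss per sample
print(loss.shape)

# Manual equivalent of reduction='mean' (per the docs, each sample's loss is
# divided by its target length before averaging over the batch):
manual_mean = (loss / target_lengths.float()).mean()
print(manual_mean)
```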

I have created a small dataset.
Code - debug asr
The code before ‘Model’ just creates the databunch, so you can run it as it is.

Update: CTC works fine. There’s some issue with how I have defined the model.