CTCLoss backward path

Hi all,

I was trying to use the PyTorch wrapper of the CTCLoss function (GitHub - SeanNaren/warp-ctc: Pytorch Bindings for warp-ctc), but it doesn't work with the current master of PyTorch. I ran this sample code:

import torch
from torch.autograd import Variable
from warpctc_pytorch import CTCLoss
ctc_loss = CTCLoss()
probs = torch.FloatTensor([[[0.1, 0.6, 0.1, 0.1, 0.1], [0.1, 0.1, 0.6, 0.1, 0.1]]]).transpose(0, 1).contiguous()  # (seq_len, batch, num_classes) = (2, 1, 5)
labels = Variable(torch.IntTensor([1, 2]))    # concatenated target sequences (blank = 0)
label_sizes = Variable(torch.IntTensor([2]))  # length of each target sequence
probs_sizes = Variable(torch.IntTensor([2]))  # length of each input sequence
probs = Variable(probs, requires_grad=True)
cost = ctc_loss(probs, labels, probs_sizes, label_sizes)
cost.backward()

And it breaks with this error:

Traceback (most recent call last):
  File "ctc_test.py", line 16, in <module>
    cost.backward()
  File "/usr/local/lib/python3.5/dist-packages/torch/autograd/variable.py", line 128, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/usr/local/lib/python3.5/dist-packages/torch/autograd/__init__.py", line 83, in backward
    variables, grad_variables, retain_graph, create_graph)
RuntimeError: expected Variable (got 'torch.FloatTensor')

Anybody know how to fix this error?
Thanks in advance!


Apparently some recent patch now requires returning Variables from torch.autograd.Function's backward method (before, Tensors were fine). Try wrapping self.grads with torch.autograd.Variable(self.grads).

My own autograd functions are affected as well.
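
For reference, the workaround sits inside the binding's autograd Function. Here is a minimal sketch of what it looks like; apart from self.grads, the class name, the argument layout and the placeholder forward body are my assumptions, and the actual warpctc_pytorch source will differ:

import torch
from torch.autograd import Function, Variable

class _CTC(Function):
    def forward(self, acts, labels, act_lens, label_lens):
        # The real binding calls into the warp-ctc C library here and stores the
        # gradient w.r.t. the activations; these two lines are only placeholders.
        self.grads = acts.new(acts.size()).zero_()
        return torch.FloatTensor([0.0])

    def backward(self, grad_output):
        # Previously returning the raw Tensor was fine:
        #     return self.grads, None, None, None
        # On current master backward has to return Variables, so wrap it:
        return Variable(self.grads), None, None, None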


@vadimkantorov is right. This will be improved before 0.4 is released once Variable and Tensor are merged. The master branch is in an intermediate state where autograd Functions have to return Variables and not Tensors.
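
To make that intermediate state concrete, here is a small self-contained sketch using the old-style Function API (illustrative code only, not taken from PyTorch or warp-ctc):

import torch
from torch.autograd import Function, Variable

class Double(Function):
    def forward(self, input):
        return input * 2

    def backward(self, grad_output):
        # With the old-style API grad_output can arrive as a raw Tensor; returning
        # it directly now raises "expected Variable", so wrap the result first.
        grad_input = grad_output * 2
        if not isinstance(grad_input, Variable):
            grad_input = Variable(grad_input)
        return grad_input

x = Variable(torch.FloatTensor([1.0, 2.0]), requires_grad=True)
Double()(x).sum().backward()
print(x.grad)  # Variable containing [2, 2]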

Hi, did you solve this error? I am facing the same problem.

Thank you! It’s working!