RuntimeError: expected Variable or None (got torch.FloatTensor)

I’m using the crnn.pytorch repo and received the above error when running cost.backward().
Has anyone encountered this error? How could I solve it?

Could you post a small code snippet reproducing this error?

This is the model def:
import torch.nn as nn

class BidirectionalLSTM(nn.Module):

    def __init__(self, nIn, nHidden, nOut):
        super(BidirectionalLSTM, self).__init__()

        self.rnn = nn.LSTM(nIn, nHidden, bidirectional=True)
        self.embedding = nn.Linear(nHidden * 2, nOut)

    def forward(self, input):
        recurrent, _ = self.rnn(input)
        T, b, h = recurrent.size()
        t_rec = recurrent.view(T * b, h)

        output = self.embedding(t_rec)  # [T * b, nOut]
        output = output.view(T, b, -1)

        return output

class CRNN(nn.Module):

    def __init__(self, imgH, nc, nclass, nh, n_rnn=2, leakyRelu=False):
        super(CRNN, self).__init__()
        assert imgH % 16 == 0, 'imgH has to be a multiple of 16'

        ks = [3, 3, 3, 3, 3, 3, 2]
        ps = [1, 1, 1, 1, 1, 1, 0]
        ss = [1, 1, 1, 1, 1, 1, 1]
        nm = [64, 128, 256, 256, 512, 512, 512]

        cnn = nn.Sequential()

        def convRelu(i, batchNormalization=False):
            nIn = nc if i == 0 else nm[i - 1]
            nOut = nm[i]
            cnn.add_module('conv{0}'.format(i),
                           nn.Conv2d(nIn, nOut, ks[i], ss[i], ps[i]))
            if batchNormalization:
                cnn.add_module('batchnorm{0}'.format(i), nn.BatchNorm2d(nOut))
            if leakyRelu:
                cnn.add_module('relu{0}'.format(i),
                               nn.LeakyReLU(0.2, inplace=True))
            else:
                cnn.add_module('relu{0}'.format(i), nn.ReLU(True))

        convRelu(0)
        cnn.add_module('pooling{0}'.format(0), nn.MaxPool2d(2, 2))  # 64x16x64
        convRelu(1)
        cnn.add_module('pooling{0}'.format(1), nn.MaxPool2d(2, 2))  # 128x8x32
        convRelu(2, True)
        convRelu(3)
        cnn.add_module('pooling{0}'.format(2),
                       nn.MaxPool2d((2, 2), (2, 1), (0, 1)))  # 256x4x16
        convRelu(4, True)
        convRelu(5)
        cnn.add_module('pooling{0}'.format(3),
                       nn.MaxPool2d((2, 2), (2, 1), (0, 1)))  # 512x2x16
        convRelu(6, True)  # 512x1x16

        self.cnn = cnn
        self.rnn = nn.Sequential(
            BidirectionalLSTM(512, nh, nh),
            BidirectionalLSTM(nh, nh, nclass))

    def forward(self, input):
        # conv features
        conv = self.cnn(input)
        b, c, h, w = conv.size()
        assert h == 1, "the height of conv must be 1"
        conv = conv.squeeze(2)
        conv = conv.permute(2, 0, 1)  # [w, b, c]

        # rnn features
        output = self.rnn(conv)

        return output
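As a sanity check, the shapes work out like this (a minimal sketch; imgH=32, nc=1, nclass=37, nh=256 and the 100-pixel-wide dummy image are just assumptions matching crnn.pytorch's defaults, not my real data):

import torch
from torch.autograd import Variable

crnn = CRNN(imgH=32, nc=1, nclass=37, nh=256)
dummy = Variable(torch.randn(4, 1, 32, 100))  # [batch, nc, imgH, width]
preds = crnn(dummy)
print(preds.size())  # torch.Size([26, 4, 37]) -> [T, batch, nclass], the layout CTC expects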

This is where I do the backprop:

preds = crnn(image)  # [T, b, nclass] activations from the CRNN
preds_size = Variable(torch.IntTensor([preds.size(0)] * batch_size))  # every sample has sequence length T
cost = criterion(preds, text, preds_size, length) / batch_size
crnn.zero_grad()
cost.backward()
optimizer.step()
return cost

Thanks for the code!
Could you explain what text is and which criterion you are using?

text is a Variable wrapping a torch.IntTensor.

I’m using CTCLoss
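
For context, this is roughly how I build text, length and the criterion (a sketch following crnn.pytorch's utils.strLabelConverter and SeanNaren's warp-ctc binding; the alphabet and sample strings are placeholders, not my actual data):

import torch
from torch.autograd import Variable
from warpctc_pytorch import CTCLoss
import utils  # crnn.pytorch's utils.py

criterion = CTCLoss()
converter = utils.strLabelConverter('0123456789abcdefghijklmnopqrstuvwxyz')

cpu_texts = ['hello', 'world']      # ground-truth strings for the batch
t, l = converter.encode(cpu_texts)  # concatenated label indices and per-sample label lengths
text = Variable(t)                  # torch.IntTensor of size sum(label lengths)
length = Variable(l)                # torch.IntTensor of size batch_size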

Hi, I am also facing the same issue.

Thanks for the info.
Are you using @tom’s warp-ctc wrapper or another approach?

Did you solve this problem? I am facing the same error.

Hi

There was an issue with the PyTorch version I was using. In the warp-ctc binding, at line 47, change the return statement to return Variable(gradients) and it will work.
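
Roughly, the change looks like this (paraphrased from memory of the binding's backward(); the exact line number and variable names may differ in your checkout):

# warp-ctc pytorch binding, inside the CTC autograd Function (paraphrased):
def backward(self, grad_output):
    # gradients is a plain torch.FloatTensor computed during forward(); on the affected
    # PyTorch version, backward() must return Variables (or None), hence the wrap:
    return Variable(gradients), None, None, None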

Thanks for your reply.

May I ask which PyTorch version you are using for the crnn.pytorch project? Are you using the current version of SeanNaren/warp-ctc?

I successfully ran the crnn.pytorch project three months ago, but now the training loss does not decrease at all.

I am using SeanNaren/warp-ctc. For my data, the loss decreases consistently over training time.

Could you elaborate on this? I tried it; the error goes away, but the loss stays at a constant 0.

What learning rate are you using? In my case, with a learning rate of 0.01 the loss was decreasing but accuracy stayed at 0. I lowered the learning rate to 0.005 and saw accuracy improve with training time.
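
For reference, the only change on my side was the optimizer's learning rate; a minimal sketch (assuming RMSprop, which crnn.pytorch uses unless you choose Adam or Adadelta):

import torch.optim as optim

# 0.01 kept accuracy at 0 for me; 0.005 started improving with training time
optimizer = optim.RMSprop(crnn.parameters(), lr=0.005)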