Error when using a customized loss function: leaf variable has been moved into the graph interior

I tried to construct my own loss function, but I get the error "leaf variable has been moved into the graph interior" when the program runs loss.backward(). I tried to debug it step by step, but I couldn't solve the problem. The code is shown below.

Definition of MyLoss:

import torch
from torch.autograd import Variable

def hammingDistance(x, y):
    # counts differing positions; note that x != y is not differentiable,
    # so no gradient can flow back through this term
    return torch.sum(x != y)

def MyLoss(data, pred):
    temp_loss = Variable(torch.tensor(0.).cuda(), requires_grad=True)  # overwritten below, never used
    Ld = Variable(torch.tensor(0.).cuda(), requires_grad=True)
    Lh = Variable(torch.tensor(0.).cuda(), requires_grad=True)
    for inxi, hi in enumerate(data, 0):
        tempLd = torch.norm(torch.abs(hi) - 1.)  # push entries of hi towards +/-1
        Ld = Ld + tempLd  # out-of-place add: Ld is rebound, not modified in place
        for inxj, hj in enumerate(data[inxi + 1:], inxi + 1):
            # Rij = 1 for pairs with the same label, 0 otherwise
            if pred[inxi] == pred[inxj]:
                Rij = 1.0
            else:
                Rij = 0.0
            # pull similar pairs together, push dissimilar pairs apart (margin 180)
            tempLh = (Rij * hammingDistance(hi, hj)
                      + (1.0 - Rij) * max(180.0 - hammingDistance(hi, hj), 0.)) * 0.5
            Lh = Lh + tempLh
    temp_loss = 0.5 * Ld + Lh
    return temp_loss
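
As an aside, the accumulators above do not actually need to be leaf Variables created with requires_grad=True: a plain zero tensor works, because autograd tracks the graph through the out-of-place additions as long as the terms being summed (which come from the network output) already require grad. A minimal sketch of that pattern, not the original code:

import torch

def sketch_loss(outputs):
    # plain accumulator: an ordinary tensor, not a leaf that requires grad
    ld = torch.zeros((), device=outputs.device)
    for h in outputs:
        # gradients flow because h comes from the network and already requires grad
        ld = ld + torch.norm(torch.abs(h) - 1.0)
    return 0.5 * ld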

And the statements that execute it:

outputs = net(temp_inputs)
loss = MyLoss(outputs, temp_labels)
loss.backward()
optimizer.step()

The main error messages:

  File "F:/2-编程练习/py_practice/PolyDatabase/codes/deepHash.py", line 180, in <module>
    loss.backward()

  File "C:\Users\Sherlock_PC\Anaconda3\lib\site-packages\torch\tensor.py", line 102, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)

  File "C:\Users\Sherlock_PC\Anaconda3\lib\site-packages\torch\autograd\__init__.py", line 90, in backward
    allow_unreachable=True)  # allow_unreachable flag

RuntimeError: leaf variable has been moved into the graph interior

Thanks in advance

I’ve solved the problem. The error was actually raised by another function: it created a tensor as a leaf Variable with requires_grad=True and then wrote into it in place, row by row. An in-place write turns the leaf into an interior node of the autograd graph, which is what backward() complains about.

def MySignFun(self, vx):
    # res is created as a leaf that requires grad and is then assigned to
    # in place below; this is what caused the error
    res = Variable(torch.zeros_like(vx).cuda(), requires_grad=True)
    for index, data in enumerate(vx, 0):
        res[index] = torch.FloatTensor([self.signJudge(x) for x in data])
    return res
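
For anyone who hits the same message: creating a tensor as a leaf with requires_grad=True and then assigning into it index by index is exactly what autograd forbids. Here is a minimal reproduction of the failure mode; where it surfaces depends on the PyTorch version (older releases raised "leaf variable has been moved into the graph interior" at backward() time, newer ones reject the in-place write itself):

import torch

x = torch.randn(4, requires_grad=True)
res = torch.zeros(4, requires_grad=True)  # a leaf that requires grad
res[0] = x[0] * 2     # in-place write moves the leaf into the graph interior
res.sum().backward()  # RuntimeError on older versions; newer ones fail at the line above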

Changing the statement

        res = Variable(torch.zeros_like(vx).cuda(), requires_grad=True)

to:

        res = Variable(torch.zeros_like(vx).cuda())

makes it work. res is then an ordinary tensor rather than a leaf that requires grad, so the in-place row assignments no longer move a leaf into the interior of the graph.
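
If you want to avoid in-place writes altogether, the same function can be built by collecting the processed rows and stacking them. This is only a sketch: sign_judge stands in for the signJudge helper from the code above, which is not shown in this thread:

import torch

def my_sign_fun_sketch(vx, sign_judge):
    # build each row as a fresh tensor, then stack; no pre-allocated buffer needed
    rows = [torch.tensor([sign_judge(x.item()) for x in row], device=vx.device)
            for row in vx]
    return torch.stack(rows)

Note that, like the original loop, this constructs new tensors from Python floats, so the result is detached from the autograd graph; if signJudge is a plain sign function, torch.sign(vx) does the same thing in one vectorized call.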

Hope this helps if you run into the same problem.
