Element 0 of tensors does not require grad and does not have a grad_fn - custom loss

custom loss:

import torch
from torch.autograd import Variable

def loss_function(p, y, b):
    losses = 0
    for i in range(b):
        k = len(y[i])
        ones = torch.ones(k).cuda()
        loss1 = -(ones - y[i]) @ ((ones - p[i]).log())
        l = y[i] * (p[i].log())
        prod = (ones - y[i]) - l
        loss2 = loss1 + torch.max(ones - prod)
        losses = losses + loss1 + loss2  # accumulate over the batch
    return Variable(losses / b)

I don't know how to correct this error.

Remove Variable(); wrapping the result detaches it from the computation graph and turns it into a leaf tensor.
Variable has been deprecated since PyTorch 0.4, so unless you are on PyTorch 0.3 or older, forget about it :slight_smile:
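
For example, a minimal sketch (same math as your function, just returning the tensor directly so it keeps its grad_fn; the device argument is an assumption that replaces the hard-coded .cuda()):

import torch

def loss_function(p, y, b):
    losses = 0
    for i in range(b):
        k = len(y[i])
        ones = torch.ones(k, device=p[i].device)
        loss1 = -(ones - y[i]) @ ((ones - p[i]).log())
        l = y[i] * (p[i].log())
        prod = (ones - y[i]) - l
        loss2 = loss1 + torch.max(ones - prod)
        losses = losses + loss1 + loss2
    return losses / b  # plain tensor, computation graph intact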

Still the same error…

Hi,
Could you provide a reproducible snippet?
Right now I think the issue is that neither p nor y requires grad.

Do you want to see what p and y are?

Before calling the function, check that p.requires_grad == True, and the same for y.
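
Something like this (names taken from your snippet; note that y is a target, so it usually does not need grad, but p must come out of the model with a graph attached):

print(p.requires_grad, y.requires_grad)  # p should print True
loss = loss_function(p, y, b)
print(loss.grad_fn)  # should not be None if p requires grad
loss.backward()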
