The derivative for 'target' is not implemented

loss1 works, but loss2 raises the error "the derivative for 'target' is not implemented".
Why does this error occur?

import torch
import torch.nn as nn
from torch.autograd import Variable

m = nn.Sigmoid()
loss_fn = nn.BCELoss()

a = Variable(torch.Tensor([[1, 2], [3, 4]]), requires_grad=True)
y = torch.sum(a ** 2)
target = torch.empty_like(y).random_(2)  # same shape as m(y), does not require grad
label = Variable(torch.full_like(y, 10), requires_grad=True)  # requires grad
y.backward()
print(a.grad)
loss1 = loss_fn(m(y), target)  # works
loss2 = loss_fn(m(y), label)   # RuntimeError: the derivative for 'target' is not implemented


You are passing a target tensor that requires gradients (label) in the second use case.
Since the derivative for the target is not supported by nn.BCELoss, you should detach it before calling loss_fn.
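
A minimal sketch of the fix, reusing the names from the snippet above:

print(label.requires_grad)           # True  -> triggers the error
print(label.detach().requires_grad)  # False -> fine to use as a target
loss2 = loss_fn(m(y), label.detach())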


Thank you. It works.

import torch
import torch.nn as nn
from torch.autograd import Variable

m = nn.Sigmoid()
loss_fn = nn.BCELoss()

a = Variable(torch.Tensor([[1, 2], [3, 4]]), requires_grad=True)
y = torch.sum(a ** 2)
target = torch.empty_like(y).random_(2)
# label = Variable(torch.full_like(y, 10), requires_grad=True)
label = torch.ones_like(y, requires_grad=True)
y.backward()
print(a.grad)
loss1 = loss_fn(m(y), target)
loss2 = loss_fn(m(y), label.detach())  # detach() removes the grad requirement, so no error
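
If gradients for the target are never needed, an alternative is to create it without requires_grad in the first place (a minimal sketch):

label = torch.ones_like(y)    # requires_grad defaults to False
loss2 = loss_fn(m(y), label)  # no detach() needed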
