Pass Indices of 'ones' in Target to the Loss Function for Multi-Label Prediction

Consider the following situation for multi-label classification.

import torch

output = torch.tensor([[0.3416, 0.6336, 0.3775, 0.2556, 0.6288]])  # model outputs for 5 classes
target = torch.tensor([[1, 0, 1, 0, 1]])                            # dense multi-hot target
target_indices = torch.tensor([[0, 2, 4]])                          # indices of the 'ones' in the target

Is there a loss function with which I can calculate the cross-entropy loss in the following way:
loss(output, target_indices)?

For a multi-label classification, you could use nn.BCEWithLogitsLoss with your target tensor directly (after casting it to a FloatTensor):

import torch
import torch.nn as nn

output = torch.tensor([[0.3416, 0.6336, 0.3775, 0.2556, 0.6288]])
target = torch.tensor([[1, 0, 1, 0, 1]]).float()  # cast to FloatTensor for BCEWithLogitsLoss

criterion = nn.BCEWithLogitsLoss()
loss = criterion(output, target)
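
For reference, nn.BCEWithLogitsLoss combines a sigmoid with binary cross-entropy in a numerically stable way, so the loss is applied element-wise per class. A minimal sketch of that equivalence (up to numerical precision), reusing the tensors defined above:

import torch
import torch.nn as nn

output = torch.tensor([[0.3416, 0.6336, 0.3775, 0.2556, 0.6288]])
target = torch.tensor([[1, 0, 1, 0, 1]]).float()

# BCEWithLogitsLoss applies the sigmoid internally ...
loss_a = nn.BCEWithLogitsLoss()(output, target)
# ... which matches applying the sigmoid manually and using BCELoss
loss_b = nn.BCELoss()(torch.sigmoid(output), target)
assert torch.allclose(loss_a, loss_b)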

Thanks for replying. However, I want to pass the target indices; that is a constraint, since I have a sparse matrix representation of the binary vector and I wish to use it. Is there any way to do that?

I don’t think you can use the index approach with the current nn.BCE(WithLogits)Loss implementation, since the target is expected to be a FloatTensor and thus contains a value for every class of each data sample.

Would you be able to create the right target using scatter?

import torch

output = torch.tensor([[0.3416, 0.6336, 0.3775, 0.2556, 0.6288]])
target = torch.zeros_like(output)             # dense target filled with zeros
target_indices = torch.tensor([[0, 2, 4]])    # indices of the "ones"
target.scatter_(1, target_indices, 1)         # set those positions to 1
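
Putting the two steps together, here is a minimal sketch of a small helper (the name bce_loss_from_indices is just for illustration) that accepts the indices directly, as originally requested:

import torch
import torch.nn as nn

def bce_loss_from_indices(output, target_indices):
    # Build a dense multi-hot target from the indices of the "ones",
    # then apply BCEWithLogitsLoss as usual.
    target = torch.zeros_like(output)
    target.scatter_(1, target_indices, 1.0)
    return nn.BCEWithLogitsLoss()(output, target)

output = torch.tensor([[0.3416, 0.6336, 0.3775, 0.2556, 0.6288]])
target_indices = torch.tensor([[0, 2, 4]])
loss = bce_loss_from_indices(output, target_indices)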

Okay. Thanks. I’ll try that.