Cross entropy loss multi target

Hello, I want to use one-hot encoded targets to compute a cross entropy loss.

for example

input:
[[0.1, 0.2, 0.8, 0, 0],
 [0, 0, 2, 0, 0.1]]

target is:
[[1, 0, 1, 0, 0],
 [1, 1, 1, 0, 0]]

I saw a discussion suggesting to take the argmax of the label to get the class index,
but I have multiple 1s in one row, and argmax will only return a single index.

How do I solve this problem?

You cannot use nn.CrossEntropyLoss for a multi-label classification and would need to use nn.BCEWithLogitsLoss instead.
To do so you can keep the shape of the target tensor and transform it to a FloatTensor via target = target.float().
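A minimal sketch of that suggestion, reusing the tensors from the question (assuming the second input row was meant to have 5 entries to match the 5-class targets):

```python
import torch
import torch.nn as nn

# Raw model outputs (logits) for 2 samples and 5 classes.
logits = torch.tensor([[0.1, 0.2, 0.8, 0.0, 0.0],
                       [0.0, 0.0, 2.0, 0.0, 0.1]])

# Multi-hot targets: more than one active class per row is fine here.
# BCEWithLogitsLoss expects the target as a FloatTensor, so cast it.
target = torch.tensor([[1, 0, 1, 0, 0],
                       [1, 1, 1, 0, 0]]).float()

# BCEWithLogitsLoss applies sigmoid internally, so pass raw logits,
# not probabilities.
criterion = nn.BCEWithLogitsLoss()
loss = criterion(logits, target)
print(loss.item())
```

Note that each class is treated as an independent binary decision, which is exactly what a multi-label setup needs, whereas nn.CrossEntropyLoss assumes exactly one correct class per sample.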

Beware when copying it: the proper spelling is torch.nn.BCEWithLogitsLoss.
