How to make NLLLoss work with a multi-label target?


I want NLLLoss in PyTorch to handle a multi-label target (e.g. [0, 0, 1, 0, 1, 1]).

However, I got an error saying this loss does not support multi-target,
so I checked the documentation and found that it expects an input of shape (N, C) and a target of shape (N,).

I want to see how PyTorch calculates NLLLoss, since it expects values from log_softmax rather than softmax,

and I'm wondering whether it differs from a function I could build myself, like the one below.

import torch

def NLLLoss(logs, targets):
    # For each sample, pick out the log-probability of its target class.
    out = torch.zeros_like(targets, dtype=torch.float)
    for i in range(len(targets)):
        out[i] = logs[i][targets[i]]
    # Negate and average over the batch (the default "mean" reduction).
    return -out.sum() / len(out)
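For what it's worth, here is a quick sanity check I tried (the tensor values are made up): a vectorized one-liner that does the same gather as the loop above, compared against the built-in nn.NLLLoss.

```python
import torch
import torch.nn.functional as F

# Made-up batch: 3 samples, 4 classes (values are arbitrary).
logits = torch.tensor([[1.0, 2.0, 0.5, -1.0],
                       [0.2, -0.3, 1.5, 0.0],
                       [-0.5, 0.1, 0.4, 2.0]])
log_probs = F.log_softmax(logits, dim=1)   # NLLLoss expects log-probabilities
targets = torch.tensor([1, 2, 3])          # one class index per sample

# Vectorized version of the loop: pick each sample's target
# log-probability, negate, and average (the default "mean" reduction).
manual = -log_probs[torch.arange(len(targets)), targets].mean()
builtin = torch.nn.NLLLoss()(log_probs, targets)

print(torch.allclose(manual, builtin))  # prints True
```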

But it seems it is implemented in C, and sadly I don't know C.

Does anyone know how NLLLoss is implemented in PyTorch (please correct my code if possible…), or how to make nn.NLLLoss handle a multi-label target?

(I don't want to use MultiMarginLoss.)

Hello Changrok!

I’m not sure what you mean by “multitarget.”

If you are working on a multi-label, multi-class classification problem,
then you don’t want to be using NLLLoss (or CrossEntropyLoss).
You should use BCEWithLogitsLoss.

The key idea is that a multi-label, multi-class problem (say, nClass
classes) can be understood as nClass binary classification problems.
That is, each sample has nClass binary predictions / labels: Class-1
is present or absent; class-2 is (at the same time as class-1) present
or absent, and so on.
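For example, here is a minimal sketch with made-up logits, using the 6-class multi-label target from your question:

```python
import torch
import torch.nn as nn

# Multi-label target from the question: classes 2, 4, and 5 are present.
# Shape is (nBatch = 1, nClass = 6); entries are floats, not class indices.
target = torch.tensor([[0., 0., 1., 0., 1., 1.]])

# Made-up raw model outputs (logits, NOT probabilities), same shape as target.
logits = torch.tensor([[-1.2, 0.3, 2.1, -0.5, 1.7, 0.9]])

loss_fn = nn.BCEWithLogitsLoss()  # applies sigmoid internally, per class
loss = loss_fn(logits, target)

# Equivalent view: nClass independent binary cross-entropy terms, averaged.
probs = torch.sigmoid(logits)
manual = -(target * probs.log() + (1 - target) * (1 - probs).log()).mean()
print(torch.allclose(loss, manual))  # prints True
```

Note that the model should output raw logits here; BCEWithLogitsLoss applies the sigmoid itself (in a numerically stable way), so you should not add a sigmoid layer before it.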

If this doesn’t answer your question, please give a brief description of
the conceptual content of your task, and tell us the shapes of the
output of your model and your target (including the nBatch dimension).

Good luck.

K. Frank
