Hello!
The link provided leads to a general documentation page, not a concrete passage.
The current documentation of NLLLoss still leaves it vague how the per-class weights are interpreted.
For instance, per the text on that docs page, NLLLoss — PyTorch 2.1 documentation,
the loss is computed from an input with |C| columns whose values are expected to be log-probabilities (e.g. the output of LogSoftmax) for each of the |C| classes. It is natural to suppose that the i-th element of the weight vector passed to
the loss corresponds to the i-th column of the input. But the torch NN modules never explicitly state which final-layer units (i.e. which columns of the input to the loss) are related to which classes.
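To make that supposition concrete, here is a minimal sketch of what I assume happens (the tensors and weight values are made up for illustration; `reduction="none"` is used so the per-sample weighting is visible):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Made-up scores for a single sample over 3 classes.
logits = torch.tensor([[2.0, 0.5, -1.0]])
log_probs = F.log_softmax(logits, dim=1)   # NLLLoss expects log-probabilities

# My assumption: weight[i] is tied to column i of the input,
# i.e. it scales the loss whenever target == i.
weight = torch.tensor([1.0, 10.0, 1.0])
loss_fn = nn.NLLLoss(weight=weight, reduction="none")

target = torch.tensor([1])                 # class index 1
print(loss_fn(log_probs, target))          # equals -weight[1] * log_probs[0, 1]
```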
So which principle does torch utilise to map the weights onto classes in an arbitrary problem setup?
In my case the weights are [0, 1], and after some trials it is now clear to me that the first element of weights pertains to class 0 and the second to class 1. But that is probably the simplest case, with the least ambiguity.
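For reference, a minimal version of the trials I ran (the exact tensors are illustrative, not my real data):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

log_probs = F.log_softmax(torch.tensor([[1.0, -1.0],
                                        [-1.0, 1.0]]), dim=1)
loss_fn = nn.NLLLoss(weight=torch.tensor([0.0, 1.0]), reduction="none")

# Sample 0 has target class 0, sample 1 has target class 1.
targets = torch.tensor([0, 1])
print(loss_fn(log_probs, targets))
# The first loss comes out as 0  -> weight[0] applies to class 0;
# the second loss is non-zero    -> weight[1] applies to class 1.
```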
The reply of @done1892 in this post
gives an example for a classification problem with classes [0…4], and the author of that post considered an example with classes [0…10].
In both cases, however, the classes are non-negative integers that are already ordered starting from 0.
In a more general situation (e.g. with classes [-11, 0, 4]), would torch assign the weights to the classes in ascending order, i.e. w1 to "-11", w2 to "0", w3 to "4"?
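In other words, if the mapping really is ascending, is an explicit re-indexing like the sketch below what torch effectively expects? (This is just my guess, not something I found in the docs; the weight values are arbitrary.)

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

raw_labels = torch.tensor([4, -11, 0])      # arbitrary class labels
classes = sorted({-11, 0, 4})               # ascending: [-11, 0, 4]
to_index = {c: i for i, c in enumerate(classes)}

# Map raw labels to indices 0..C-1, so that weight[i] belongs to classes[i]:
targets = torch.tensor([to_index[int(l)] for l in raw_labels])  # [2, 0, 1]

weight = torch.tensor([1.0, 2.0, 3.0])      # w1 -> -11, w2 -> 0, w3 -> 4 (assumed)
loss_fn = nn.NLLLoss(weight=weight)

logits = torch.randn(3, 3)
print(loss_fn(F.log_softmax(logits, dim=1), targets))
```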
Thanks in advance for your response.