I want to develop my own NLL loss that penalizes errors according to their severity (I want some mistakes to be punished more than others).
To this end I need to alter the source code of PyTorch's NLLLoss, but it is implemented in C++ (in the ATen backend) and I can't seem to find it.
I found an implementation of CrossEntropyLoss here:
When I test it against PyTorch's CrossEntropyLoss it gives the same output, but as soon as I use class weights the outputs differ, so it is a bad starting point for my modifications.
My main questions are:
- Is there any out-of-the-box functionality in PyTorch to penalize certain mistakes more than others?
- If not, I can implement this myself, but I don't understand how PyTorch's NLLLoss handles the `weight` argument, which keeps me from my first goal of reproducing the stock loss exactly in my own tweakable implementation.
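For context, here is a minimal sketch of what I believe NLLLoss does with per-class weights under `reduction='mean'` (my understanding is that the weighted sum is divided by the sum of the target classes' weights, not by the batch size, which seems to be the detail manual implementations usually miss):

```python
import torch
import torch.nn.functional as F

def manual_nll_loss(log_probs, targets, weight=None):
    """Sketch of NLLLoss with per-class weights, reduction='mean'.

    The subtlety: with weights, the mean is taken over the weights of
    the target classes (w.sum()), not over the batch size.
    """
    if weight is None:
        weight = torch.ones(log_probs.size(1))
    # pick out the log-probability of each sample's target class
    per_sample = -log_probs[torch.arange(len(targets)), targets]
    # look up each sample's class weight
    w = weight[targets]
    return (w * per_sample).sum() / w.sum()

# compare against PyTorch's built-in implementation
torch.manual_seed(0)
logits = torch.randn(5, 3)
log_probs = F.log_softmax(logits, dim=1)
targets = torch.tensor([0, 2, 1, 1, 0])
weight = torch.tensor([1.0, 2.0, 0.5])

mine = manual_nll_loss(log_probs, targets, weight)
ref = F.nll_loss(log_probs, targets, weight=weight)
print(torch.allclose(mine, ref))
```

If this normalization is correct, it would explain why an implementation that divides by the batch size matches the unweighted case but diverges once weights are used.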