Custom weighted NLL_loss

Hello,
I want to develop my own NLL_loss that penalizes certain errors according to their severity (I want some mistakes to be punished more than others).
To this end I need to alter the source code of PyTorch's NLL_loss, but it is written in C and I can't seem to find it.
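To make it concrete, here is a rough, untested sketch of the kind of loss I have in mind. The `cost` matrix is hypothetical, something I would define myself, with one severity entry per (target, predicted) class pair:

```python
import torch

def severity_weighted_nll(log_probs, targets, cost):
    """log_probs: (N, C) log-probabilities, targets: (N,) class indices,
    cost: (C, C) matrix of error severities (defined by me)."""
    # Standard per-sample NLL: -log p(target class)
    nll = -log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    # Scale each sample's loss by how severe its current confusion is
    preds = log_probs.argmax(dim=1)
    severity = cost[targets, preds]
    return (severity * nll).mean()
```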
I found an implementation of CrossEntropyLoss here:
https://gist.github.com/mjdietzx/50d3c26f1fd543f1808ffffacc987cbf
When I test it against the CrossEntropyLoss in PyTorch it gives the same output, but as soon as I use weights the outputs differ, so my starting point for altering the code is flawed.

My main points are:

  1. Is there any out-of-the-box functionality in PyTorch to penalize certain mistakes more than others?
  2. If not, I can implement this myself, but I don't understand how NLL_loss in PyTorch handles weights, which prevents me from achieving my first goal: recreating the exact built-in loss with my own tweakable implementation.

Thanks all.

This is the source code (and 2D version).

The weights in NLLLoss are used to compute a weighted average of the per-sample losses (one weight per target class). I think the docs explain this clearly.
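For reference, here is a small sketch (with made-up shapes and weights) of the computation as I understand it for the default reduction='mean': each sample's -log p(target) is scaled by its target class's weight, and the sum is normalized by the sum of those weights.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
N, C = 4, 3                                   # made-up batch size and class count
log_probs = F.log_softmax(torch.randn(N, C), dim=1)
targets = torch.randint(C, (N,))
weight = torch.tensor([1.0, 2.0, 0.5])        # one weight per target class

# Manual weighted average: scale each sample's -log p(target) by the
# weight of its target class, then divide by the sum of those weights
per_sample = -log_probs[torch.arange(N), targets] * weight[targets]
manual = per_sample.sum() / weight[targets].sum()

builtin = F.nll_loss(log_probs, targets, weight=weight)  # default reduction='mean'
print(torch.allclose(manual, builtin))        # expected: True
```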