Loss function to prefer nearby classes

Hey, I am looking for a loss function that prefers nearby classes. For example, say I have 10 classes and a sample has class 6: then classes 5 and 7 should receive a relatively low loss, while classes further away should receive an increasingly higher loss value. (Bonus points if I can tweak how the penalty grows with distance.)

(Quick note: I am only just getting into all of this, so sorry if this is common knowledge; I did not manage to find it…)

Hi Koen!

Consider MSELoss: if you treat the class index as a numerical target, the squared error automatically penalizes predictions that land on distant classes more heavily than predictions on nearby ones.
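A minimal sketch of this idea, assuming the model outputs a single scalar "class position" per sample instead of 10 logits (one possible setup, not the only one):

```python
import torch
import torch.nn as nn

# Treat the class index as a continuous target: the squared error
# grows with the distance between prediction and true class.
prediction = torch.tensor([5.8, 2.1])  # predicted class positions
target = torch.tensor([6.0, 7.0])      # true class indices, as floats

loss = nn.MSELoss()(prediction, target)
# The nearby prediction (5.8 vs. 6) contributes almost nothing,
# while the far-off one (2.1 vs. 7) dominates the average.
```

At inference time you would round (or clamp) the scalar output to recover a discrete class.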

You can always write your own loss function. (If you implement it using
only differentiable PyTorch tensor functions, you don't have to package
it as an official torch.autograd.Function nor implement explicit
forward() and backward() methods — autograd handles the gradients for you.)
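As one illustration of such a custom loss (the function name `distance_loss` and the `power` parameter are hypothetical, and expected-distance-under-softmax is just one possible choice): compute the softmax over the 10 logits and take the expected distance to the true class, so probability mass on far-away classes costs more.

```python
import torch

def distance_loss(logits, target, power=1.0):
    # logits: (batch, num_classes), target: (batch,) of class indices.
    # Expected |predicted class - true class| under the softmax
    # distribution; `power` tweaks how fast the penalty grows with
    # distance. Built only from differentiable tensor ops, so autograd
    # derives the backward pass automatically.
    probs = torch.softmax(logits, dim=1)
    classes = torch.arange(logits.size(1), dtype=logits.dtype)
    dist = (classes.unsqueeze(0) - target.unsqueeze(1).to(logits.dtype)).abs() ** power
    return (probs * dist).sum(dim=1).mean()

# All mass on the correct class -> loss near zero.
confident = torch.full((1, 10), -100.0)
confident[0, 6] = 100.0
low = distance_loss(confident, torch.tensor([6]))

# Uniform distribution over 10 classes, true class 6 -> mean distance 2.7.
uniform = torch.zeros(1, 10)
high = distance_loss(uniform, torch.tensor([6]))
```

Raising `power` above 1 penalizes distant classes superlinearly, which gives you the tunable penalty asked about above.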


K. Frank