I built a Conv1D model to classify items into 6 categories (classes 0 to 5).
I therefore use nn.CrossEntropyLoss, which penalizes both high probabilities assigned to wrong classes and low probabilities assigned to the right class, and that is what I was looking for.
But CrossEntropyLoss treats the classes as independent, while I'd like to reduce the loss when the model predicts 0 (cat) and the true class is 1 (dog), and increase it when the model predicts 0 (cat) and the true class is 5 (fish).
Is there a way to embed class proximity in the loss computation of a classification problem?
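To illustrate what I mean, here is a minimal sketch of the kind of distance-weighted loss I have in mind: standard cross-entropy plus a term that penalizes predictions in proportion to their index distance from the true class. The function name `distance_weighted_ce` is just something I made up for the example, not an existing PyTorch loss:

```python
import torch
import torch.nn.functional as F

def distance_weighted_ce(logits, targets, num_classes=6):
    """Hypothetical sketch: cross-entropy plus an expected-distance
    penalty, so predicting class 5 when the truth is 0 costs more
    than predicting class 1."""
    # |i - j| distance between every pair of class indices, shape (C, C)
    idx = torch.arange(num_classes, dtype=torch.float32)
    dist = (idx.unsqueeze(0) - idx.unsqueeze(1)).abs()
    probs = F.softmax(logits, dim=1)            # (N, C)
    # expected index distance of the predicted distribution
    # from each sample's true class
    penalty = (probs * dist[targets]).sum(dim=1)  # (N,)
    ce = F.cross_entropy(logits, targets, reduction="none")
    return (ce + penalty).mean()

logits = torch.randn(4, 6)
targets = torch.tensor([0, 1, 5, 2])
loss = distance_weighted_ce(logits, targets)
```

With this, a confident "fish" (5) prediction on a true "cat" (0) sample is penalized more heavily than a confident "dog" (1) prediction, while the plain cross-entropy part is unchanged. I'm not sure this is the standard way to do it, though.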
Thanks in advance for any help!