KFrank (K. Frank) May 15, 2021, 1:39pm
Hi Abhilash!
Abhilash:
how do I adjust the penalty for misclassifying, for example, a 5 as a 6, because a human would make a similar mistake as they look pretty close (for a terrible image), or a 1 as a 7 in a few images.
The most straightforward approach would be to use cross-entropy as
your loss function, but to use probabilistic ("soft") labels, rather than
categorical ("hard") labels.
Pytorch does not have a soft-label version of cross-entropy loss built
in, but it is easy to implement one. See this post:
Hello Raaj!
I do not believe that pytorch has a “soft” cross-entropy function built in.
But you can implement it using pytorch tensor operations, so you should
get the full benefit of autograd and gpu acceleration.
See this (pytorch version 0.3.0) script:
import torch
torch.__version__

# define "soft" cross-entropy with pytorch tensor operations
def softXEnt (input, target):
    logprobs = torch.nn.functional.log_softmax (input, dim = 1)
    return -(target * logprobs).sum() / input.shap…
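(The excerpt above is cut off mid-line. For completeness, here is a runnable version of that function, assuming the intended denominator is the batch size, input.shape[0]:)

import torch

# "soft" cross-entropy: target is a probability distribution over
# the classes, rather than a single integer class label
def softXEnt (input, target):
    # input:  raw logits, shape [nBatch, nClass]
    # target: probabilistic ("soft") labels, shape [nBatch, nClass]
    logprobs = torch.nn.functional.log_softmax (input, dim = 1)
    # assumption: the truncated line averages over the batch
    return -(target * logprobs).sum() / input.shape[0]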
This post explains how to use soft labels to reduce the penalty for certain
misclassifications:
Hi macazinc!
Here’s what I think you’re asking:
You have a multi-class (30 classes) classification problem. You know
that for most of your classes, your ground-truth target labels are
correct, but your labels sometimes mix up two of your classes, say 4
and 9. Let’s say that a sample labelled 4 is actually a 9 25% of the
time, and that a sample labelled 9 is actually a 4 10% of the time.
You will (most likely) want to use cross-entropy loss, but pytorch only
provides a version that tak…
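To make the soft-label idea concrete for your 5-vs-6 example, here is a short sketch that uses the softXEnt function above. The 0.9 / 0.1 split and nClass = 10 are purely illustrative choices, not recommendations:

import torch
import torch.nn.functional as F

nClass = 10                              # assumption: ten classes, as with digits
hardLabels = torch.tensor ([5, 1, 3])    # ordinary integer class labels

# start from one-hot ("hard") targets
softTargets = F.one_hot (hardLabels, nClass).float()

# illustrative choice: give a "5" 10% weight on class 6, so that
# predicting 6 for a 5 is penalized less than predicting, say, 0
softTargets[hardLabels == 5, 5] = 0.9
softTargets[hardLabels == 5, 6] = 0.1

logits = torch.randn (3, nClass, requires_grad = True)   # stand-in model output
loss = softXEnt (logits, softTargets)
loss.backward()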
Best.
K. Frank