Backpropagate several error values

Hi
I designed a classification model that I want to train on ‘noisy’ ground truth values. This means that for a given forward pass, producing say 10 output values, I’d like to create a ground truth tensor which is not one-hot encoded, but made of one dominant probability and 9 others that are not zero.

This is quite similar to label smoothing, but the values are random instead of all being equal to 0.1.
I don’t understand the label smoothing examples that I found (on the net or on the forum), so can anyone help me do that?

Thanks for your help

Hi Fabrice!

Let me assume that you are using CrossEntropyLoss as your loss criterion.

CrossEntropyLoss takes two kinds of ground-truth targets. One consists
of integer class labels, with no class dimension. The second consists of
floating-point probabilities for each of the classes and has a class dimension.

It looks like you want the latter. It would have a larger value, closer to one,
for the “correct” class and smaller values, closer to zero, for the “incorrect”
classes. Those values could be chosen randomly or could be based on
something in your training data that told you that one of the “incorrect”
classes had a probability of, say, 0.15, while another had a probability of,
say, 0.05.
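For concreteness, here is a small sketch of the two kinds of targets. The class
count, batch size, and probability values are just illustrative, and the
probability-target form requires a reasonably recent version of pytorch (1.10
or later):

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
pred = torch.randn(2, 10)                  # (batch, n_classes) logits from your model

# first kind: integer class labels, shape (batch,), no class dimension
hard_target = torch.tensor([3, 7])
loss_hard = criterion(pred, hard_target)

# second kind: per-class probabilities, shape (batch, n_classes), each row summing to one
soft_target = torch.full((2, 10), 0.01)    # small probabilities for the "incorrect" classes
soft_target[0, 3] = 0.91                   # larger probability for the "correct" class
soft_target[1, 7] = 0.91
loss_soft = criterion(pred, soft_target)

In your case you would replace the constant 0.01 values with your random (or
data-driven) probabilities, keeping each row summing to one.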

Best.

K. Frank

Thanks, Frank, for your interesting answer.

Should I do something like this:

import torch
import torch.nn as nn

loss = nn.CrossEntropyLoss()
pred = model(input)
target = torch.zeros_like(pred)  # floating-point target, same shape as pred
# here I change the target according to what I indicated
output = loss(pred, target)
output.backward()
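For the commented line, I’m thinking of something like this (labels would be my
integer class indices, and the 0.9 / 0.1 split is just an example):

# labels: LongTensor of shape (batch,) holding the index of the main class per sample
noise = torch.rand_like(pred)
noise.scatter_(1, labels.unsqueeze(1), 0.0)            # no random mass on the main class
target = 0.1 * noise / noise.sum(dim=1, keepdim=True)  # the other classes share 0.1
target.scatter_(1, labels.unsqueeze(1), 0.9)           # the main class gets 0.9

so that each row of target still sums to 1.0.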

Hi Fabrice!

Yes (assuming that what you “indicated” makes sense for your use case).

Best.

K. Frank

Thanks again, I’ll try this very soon!

It works fine, thanks again for your help!