Ignore_index also for SmoothL1Loss()?

Is there a way to use something like the ignore_index functionality from torch.nn.CrossEntropyLoss also for SmoothL1Loss()?

I have two models (a classifier and a regression model). For both I want to exclude a certain class/category from the loss calculation. (For the classifier I can do this in the CrossEntropyLoss.)

Does somebody see any way to do this?

No, since SmoothL1Loss expects floating point numbers as the target, and ignoring one specific floating point value sounds like a bad idea.
If you want to ignore specific values, you could compute the unreduced loss (via reduction="none"), mask the loss values corresponding to your ignored values/ranges, and reduce the loss afterwards.

okay, great – thanks.

How exactly does this masking work?
I have a target_class (int) → CrossEntropyLoss, and a target_distance (float) → SmoothL1Loss.
Both belong to one single object.

So if the target_class is (let's say) 8, I want to include it neither in the CrossEntropyLoss (classification) nor in the SmoothL1Loss (regression).

Any idea how to do this easily, or where I could look for a solution?

Something like this should work:

import torch
import torch.nn as nn

criterion = nn.SmoothL1Loss(reduction="none")

# toy example: 10 predictions and targets; the first target gets the value that should be ignored
x = torch.randn(10, 1, requires_grad=True)
target = torch.randn(10, 1)
target[0, 0] = 8.
print(target)
# tensor([[ 8.0000],
#         [ 1.5230],
#         [-0.6978],
#         [ 0.6073],
#         [ 0.3451],
#         [-2.7498],
#         [ 0.5196],
#         [-0.5696],
#         [-0.4566],
#         [-0.5452]])

loss = criterion(x, target)
print(loss)
# tensor([[8.1776e+00],
#         [2.4468e-02],
#         [2.4312e-02],
#         [1.9052e-01],
#         [7.3119e-03],
#         [2.6121e+00],
#         [1.3799e+00],
#         [1.2772e+00],
#         [2.9809e-01],
#         [1.2309e-01]], grad_fn=<SmoothL1LossBackward0>)

# 1. for entries that should contribute to the loss, 0. for the ignored ones
mask = (~(target == 8.)).float()
print(mask)
# tensor([[0.],
#         [1.],
#         [1.],
#         [1.],
#         [1.],
#         [1.],
#         [1.],
#         [1.],
#         [1.],
#         [1.]])

# zero out the loss of the ignored entries
loss = loss * mask
print(loss)
# tensor([[0.0000],
#         [0.0245],
#         [0.0243],
#         [0.1905],
#         [0.0073],
#         [2.6121],
#         [1.3799],
#         [1.2772],
#         [0.2981],
#         [0.1231]], grad_fn=<MulBackward0>)

loss.mean().backward()  # note: mean() still divides by all 10 entries, including the masked one
print(x.grad)
# tensor([[-0.0000],
#         [-0.0221],
#         [ 0.0221],
#         [-0.0617],
#         [ 0.0121],
#         [ 0.1000],
#         [-0.1000],
#         [ 0.1000],
#         [ 0.0772],
#         [-0.0496]])

Looks good, thanks.

Is there a way to get the length of your loss tensor without counting the zeros?

The length of the loss tensor can be checked via len(loss) or loss.size(0). If you are interested in counting the non-zero (i.e. non-masked) entries, you could use mask.sum().
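If you also want the reduction itself to only account for the non-masked entries (the loss.mean() above still divides by the full batch size of 10), you could normalize by mask.sum(). A small self-contained sketch of that variant:

import torch
import torch.nn as nn

criterion = nn.SmoothL1Loss(reduction="none")
x = torch.randn(10, 1, requires_grad=True)
target = torch.randn(10, 1)
target[0, 0] = 8.

loss = criterion(x, target)
mask = (target != 8.).float()

num_valid = mask.sum()  # number of entries that actually contribute, here 9

# average over the valid entries only; the clamp guards against an all-masked batch
masked_loss = (loss * mask).sum() / num_valid.clamp(min=1)
masked_loss.backward()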
