Custom loss penalizing similar categories more strongly

Hi everyone,

I’m currently designing a network that should classify images by how many people they contain, so my output dimension is 6 (0 … 5 people). I’m observing that most errors happen in cases where 4 people are in the image but the network classifies it as 3 or 5. One idea would be to define my loss (at the moment I’m using CrossEntropyLoss) in a way that these cases are weighted more heavily compared to the others.

How can I do this?

Thanks and best regards
Jonas

Hello,

If you look at the CrossEntropyLoss documentation, there is a parameter `weight` that you can specify to give more weight to certain classes. For example,

```
weights = torch.tensor([0.1, 0.1, 0.1, 0.3, 0.3, 0.2], dtype=torch.float32)
criterion = nn.CrossEntropyLoss(weight=weights)
```

which would give more weight to classes 3, 4, 5.
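For completeness, here is how that looks applied to a dummy batch (the shapes and values are just illustrative):

```python
import torch
import torch.nn as nn

weights = torch.tensor([0.1, 0.1, 0.1, 0.3, 0.3, 0.2], dtype=torch.float32)
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 6)            # dummy batch: 8 samples, 6 classes
targets = torch.randint(0, 6, (8,))   # dummy people counts in 0..5
loss = criterion(logits, targets)     # samples with target 3 or 4 count more
```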

Hope this helps!

Hi,

Thanks for your fast reply! I’ve already seen the weight option, but it doesn’t quite fit my idea, because I’m thinking more of a “dynamic” weighting, meaning if

• the target is 4, I would like to have a weighting of [0.1, 0.1, 0.1, 0.3, 0.3, 0.3]
• the target is 2, I would like to have a weighting of [0.1, 0.3, 0.3, 0.3, 0.1, 0.1]

Is something like this possible?

You could modify the weights of the criterion like this:

```
criterion.weight = torch.tensor([0.3, 0.3, 0.3, 0.1, 0.1, 0.1], dtype=torch.float32)
```

So I guess you could have an `if` condition on the target value and change the `weight` attribute as you want. Is this what you want?
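For example, a minimal sketch of that idea for a single sample (the logits and weight values here are just illustrative):

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

logits = torch.tensor([[0.1, 0.2, 0.3, 1.5, 2.0, 0.4]])  # one sample, 6 classes
target = torch.tensor([4])

# pick a weight vector depending on the target value
if target.item() == 4:
    criterion.weight = torch.tensor([0.1, 0.1, 0.1, 0.3, 0.3, 0.3])
elif target.item() == 2:
    criterion.weight = torch.tensor([0.1, 0.3, 0.3, 0.3, 0.1, 0.1])

loss = criterion(logits, target)
```

Keep in mind that `weight` has to stay a 1D tensor with one entry per class, so this selects one weighting per forward call.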

Yep, good hint thanks!

I tried to implement a first example, but now I’m struggling with the batch size. I tried making an array for the weight variable like this:

```
w = torch.tensor(np.matlib.repmat([0.3, 0.3, 0.3, 0.1, 0.1, 0.1], args.train_batch_size, 1), dtype=torch.float32)
```

However, the weight vector can only be a 1D array. How would you approach this problem?

If I understand correctly, you want to consider the neighbors of the target when computing the loss. `nn.CrossEntropyLoss` does not permit that, but I implemented a custom loss function that takes the two neighbors of the current target into account.

```
import torch
import torch.nn as nn
import torch.nn.functional as F

class CustomLoss(nn.Module):
    def __init__(self, weights):
        super(CustomLoss, self).__init__()
        self.weights = weights

    def forward(self, logits, targets):
        loss = torch.tensor([0.0], dtype=torch.float32)
        log_probabilities = F.log_softmax(logits, dim=1)
        for i in range(targets.shape[0]):
            t = int(targets[i])
            low = max(0, t - 1)
            high = min(log_probabilities.shape[1], t + 2)
            # shift the weight window when the target sits at a border class
            offset = low - (t - 1)
            p = log_probabilities[i, low:high]
            loss += -torch.sum(torch.mul(p, self.weights[offset:offset + high - low]))
        return loss

we = torch.tensor([0.2, 0.5, 0.2], dtype=torch.float32)
criterion1 = CustomLoss(we)
```

So basically, the target has a weight of 0.5 while its two neighbors (if they both exist) each have a weight of 0.2.
Let me know if it helps!
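For what it’s worth, the same neighbor weighting can also be written without the Python loop by precomputing one weight row per target class; here is a sketch under the same assumptions (6 classes, neighbor weights [0.2, 0.5, 0.2]):

```python
import torch
import torch.nn.functional as F

def neighbor_weighted_loss(logits, targets, we=(0.2, 0.5, 0.2)):
    num_classes = logits.shape[1]
    # Row t of W holds the neighbor weights centered on class t,
    # clipped at the borders (classes 0 and num_classes - 1).
    W = torch.zeros(num_classes, num_classes)
    for t in range(num_classes):
        for off, w in zip((-1, 0, 1), we):
            c = t + off
            if 0 <= c < num_classes:
                W[t, c] = w
    log_p = F.log_softmax(logits, dim=1)
    return -(W[targets] * log_p).sum()
```

With all-zero logits every log-probability is -log(6), so an interior target like 3 gives 0.9 · log(6) while a border target like 0 gives 0.7 · log(6), since a border class only has one neighbor.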

Hi,

No worries, thanks for your effort and the code!
This helps me quite a lot, also to see how to implement such a custom loss function. So far it hasn’t worked for my problem and I have to rethink some things, but now I have a great starting point to move on.