# Should the weights for weighted cross-entropy be normalized?

I am dealing with multi-class segmentation. I have four classes, including the background class. Since the majority of pixels belong to the background class, the loss goes down, but the dice score stays really low. To rectify this, I am using class weights in the cross-entropy loss. I am calculating global weights over the whole dataset as follows:

```python
# np is numpy; labels is the list of integer label maps for the dataset
count = [0] * self.n_classes
for lbl in range(len(labels)):
    unique_values, counts = np.unique(labels[lbl], return_counts=True)
    for value, count_value in zip(unique_values, counts):
        count[int(value)] += count_value

weight_per_class = [0.] * self.n_classes
N = float(sum(count))

# inverse class frequency: total pixel count / per-class pixel count
for i in range(self.n_classes):
    weight_per_class[i] = N / float(count[i])
print("weights per class", weight_per_class)
```

`weights per class: tensor([ 1.0134, 242.3116, 216.2903, 223.6481])`
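For reference, the per-class counting loop above can also be written more compactly with `np.bincount`. A minimal self-contained sketch; the `labels` list here is a synthetic stand-in for the dataset, not the real masks:

```python
import numpy as np

n_classes = 4
# synthetic toy label maps standing in for the dataset's masks
rng = np.random.default_rng(0)
labels = [rng.integers(0, n_classes, size=(8, 8)) for _ in range(3)]

# accumulate per-class pixel counts across the whole dataset
count = np.zeros(n_classes, dtype=np.int64)
for lbl in labels:
    count += np.bincount(lbl.ravel(), minlength=n_classes)

# inverse class frequency, same formula as the loop above
weight_per_class = count.sum() / count.astype(np.float64)
print("weights per class", weight_per_class)
```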

Now, when I do not normalize these weights and use them directly in `CrossEntropyLoss`, the loss comes out to be 1.7308.

```python
CE_loss = torch.nn.CrossEntropyLoss(weight=class_weights.cuda(device))
loss = CE_loss(preds, torch.argmax(var_gt, dim=1))
```

However, when I normalize the weights using softmax, the loss comes out to be `1.5537`:

```python
normalized_weights = F.softmax(class_weights, dim=0)
print(normalized_weights)
# tensor([0.0000e+00, 1.0000e+00, 5.0016e-12, 7.8443e-09])
```

```python
CE_loss = torch.nn.CrossEntropyLoss(weight=normalized_weights.cuda(device))
loss = CE_loss(preds, torch.argmax(var_gt, dim=1))
print(loss)
```
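As an aside, softmax exponentiates its inputs, so weights that differ by hundreds saturate it into a near one-hot vector, which is why almost all of the normalized weight above lands on class 1. A small sketch contrasting softmax with a plain division by the sum:

```python
import torch
import torch.nn.functional as F

class_weights = torch.tensor([1.0134, 242.3116, 216.2903, 223.6481])

# softmax of values spread across hundreds collapses to ~one-hot:
# exp(242.3) dwarfs the other terms, so classes 0, 2, 3 get ~0 weight
softmax_weights = F.softmax(class_weights, dim=0)
print(softmax_weights)

# dividing by the sum preserves the relative ratios between classes
sum_weights = class_weights / class_weights.sum()
print(sum_weights)
```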

I have two questions. First, is only the relative weight value important for dealing with pixel imbalance in a segmentation task, or do the weights also need to be normalized?

Second, is it common practice in multi-class segmentation to calculate the loss over all classes, but to exclude the background class when calculating the dice score and saving the best checkpoint, focusing on the other classes instead?

The weights are internally normalized by dividing by their `sum`, not via `F.softmax`:

```python
import torch
import torch.nn as nn

weights = torch.tensor([1.0134, 242.3116, 216.2903, 223.6481])

criterion = nn.CrossEntropyLoss(weight=weights)
output = torch.randn(10, 4)
target = torch.randint(0, 4, (10,))
loss = criterion(output, target)
print(loss)
# tensor(2.0991)

criterion = nn.CrossEntropyLoss(weight=weights / weights.sum())
loss = criterion(output, target)
print(loss)
# tensor(2.0991)
```

and you can thus just pass the unnormalized weights to the criterion.
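Concretely, with `reduction='mean'` the weighted loss divides the weighted sum of per-sample losses by the sum of the weights of the targets that occur, so any constant factor on the weights cancels. A small sketch verifying this against a manual computation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
weights = torch.tensor([1.0134, 242.3116, 216.2903, 223.6481])
output = torch.randn(10, 4)
target = torch.randint(0, 4, (10,))

loss = nn.CrossEntropyLoss(weight=weights)(output, target)

# with reduction='mean', PyTorch computes
# sum_i(weight[target_i] * loss_i) / sum_i(weight[target_i]),
# so a global scale on the weights cancels out
per_sample = F.cross_entropy(output, target, reduction="none")
manual = (weights[target] * per_sample).sum() / weights[target].sum()

scaled_loss = nn.CrossEntropyLoss(weight=weights * 1000)(output, target)
print(loss, manual, scaled_loss)  # all three match
```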


Thank you @ptrblck. Can you please comment on this: is it common practice in multi-class segmentation to calculate the loss over all classes (including background), but to exclude the background class from the dice score calculation used for saving the best checkpoint, focusing on the other classes? Since the majority of pixels belong to the background, including them in the final dice score would skew it.

Or, once we weight the loss by the inverse of the class frequency, is there no harm in including the background in the dice score calculation?

Iām not familiar enough with your use cases, but based on e.g. `torchmetrics.Dice` it seems to be their recommendation:

> It is recommend set ignore_index to index of background class.
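For illustration, a dice score that skips the background class can also be computed by hand. This is a hypothetical helper (not part of torchmetrics), assuming `pred` and `target` are integer label maps of the same shape:

```python
import torch

def dice_per_class(pred, target, n_classes, ignore_index=0):
    """Mean dice over classes, skipping the background class."""
    scores = []
    for c in range(n_classes):
        if c == ignore_index:
            continue
        p = (pred == c)
        t = (target == c)
        inter = (p & t).sum().item()
        denom = p.sum().item() + t.sum().item()
        if denom == 0:
            continue  # class absent from both maps: skip it entirely
        scores.append(2.0 * inter / denom)
    return sum(scores) / len(scores) if scores else 0.0

# tiny toy label maps for demonstration
pred = torch.tensor([[0, 1, 2], [0, 3, 3]])
target = torch.tensor([[0, 1, 2], [0, 3, 1]])
result = dice_per_class(pred, target, n_classes=4)
print(result)
```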

Thanks @ptrblck. For the `weight` argument of the cross-entropy (CE) loss: do we also need to calculate weights for the validation split and pass them to the CE loss during `model.eval()`, or do we only calculate weights for the training split? If the second case is true, how would the CE loss know not to apply the weight parameter during the validation phase?