I am dealing with multi-class segmentation. I have four classes, including the background class. Since the majority of pixels belong to the background class, the loss goes down, but the Dice score stays really low. To rectify this, I am using class weights for the cross-entropy loss. I calculate the global weights from the whole dataset as follows:

```
count = [0] * self.n_classes
for lbl in range(len(labels)):
    unique_values, counts = np.unique(labels[lbl], return_counts=True)
    for value, count_value in zip(unique_values, counts):
        count[int(value)] += count_value

weight_per_class = [0.] * self.n_classes
N = float(sum(count))
for i in range(self.n_classes):
    weight_per_class[i] = N / float(count[i])
print("weights per class", weight_per_class)
```

`weights per class: tensor([ 1.0134, 242.3116, 216.2903, 223.6481])`
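Since the snippet above depends on `self` and the surrounding dataset code, here is a self-contained sketch of the same inverse-frequency computation (the `labels` arrays here are made-up stand-ins for the real label maps):

```python
import numpy as np

# Hypothetical stand-in for the dataset's label maps:
# small integer masks with classes 0..3 (0 = background).
labels = [
    np.array([[0, 0, 0, 1], [0, 0, 2, 3]]),
    np.array([[0, 0, 0, 0], [0, 1, 0, 0]]),
]
n_classes = 4

# Accumulate per-class pixel counts over the whole dataset.
count = np.zeros(n_classes, dtype=np.int64)
for lbl in labels:
    count += np.bincount(lbl.ravel(), minlength=n_classes)

# Inverse-frequency weights: N / count_c, same formula as above.
weight_per_class = count.sum() / count.astype(np.float64)
print("weights per class", weight_per_class)
```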

Now, when I do not normalize these weights and use them directly in `CrossEntropyLoss`, the loss comes out to be 1.7308:

```
CE_loss = torch.nn.CrossEntropyLoss(weight=class_weights.cuda(device))
loss = CE_loss(preds, torch.argmax(var_gt, dim=1))
```

However, when I normalize the weights using softmax, the loss comes out to be `1.5537`:

```
normalized_weights = F.softmax(class_weights, dim=0)
normalized_weights
tensor([0.0000e+00, 1.0000e+00, 5.0016e-12, 7.8443e-09])
```

```
CE_loss = torch.nn.CrossEntropyLoss(weight=normalized_weights.cuda(device))
loss = CE_loss(preds, torch.argmax(var_gt, dim=1))
loss
tensor(1.5537, device='cuda:0', grad_fn=<NllLoss2DBackward0>)
```
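For reference, the softmax output is nearly one-hot because softmax exponentiates its inputs: with raw weights around 242, the largest entry dominates and the others underflow toward zero. Dividing by the sum instead preserves the relative proportions; a small sketch contrasting the two:

```python
import torch
import torch.nn.functional as F

class_weights = torch.tensor([1.0134, 242.3116, 216.2903, 223.6481])

# Softmax exponentiates the weights, so the largest one (242.3)
# takes essentially all the mass and the rest underflow.
softmax_weights = F.softmax(class_weights, dim=0)

# Sum-normalization rescales but keeps the original ratios intact.
sum_weights = class_weights / class_weights.sum()

print(softmax_weights)  # near one-hot
print(sum_weights)      # same ratios as the raw weights
```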

I have two questions. First, is only the weight value important for dealing with the pixel imbalance in a segmentation task, or do we also need to normalize the weights?

Second, is it common practice in multi-class segmentation to calculate the loss over all classes, but to exclude the background class when computing the Dice score and saving the best checkpoint, focusing only on the other classes?
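To make the second question concrete, here is a minimal sketch of the kind of per-class Dice computation I have in mind, where the background (class 0) is skipped when averaging. The function name and the toy tensors are illustrative, not from any particular library:

```python
import torch

def dice_per_class(pred_labels, gt_labels, n_classes, eps=1e-6):
    """Per-class Dice score from hard label maps of any shape."""
    scores = []
    for c in range(n_classes):
        p = (pred_labels == c)
        g = (gt_labels == c)
        inter = (p & g).sum().float()
        denom = p.sum().float() + g.sum().float()
        scores.append(((2 * inter + eps) / (denom + eps)).item())
    return scores

# Hypothetical predicted and ground-truth label maps, classes 0..3.
pred = torch.tensor([[0, 1, 2, 3], [0, 0, 0, 0]])
gt   = torch.tensor([[0, 1, 2, 2], [0, 0, 0, 3]])

scores = dice_per_class(pred, gt, n_classes=4)
foreground_dice = sum(scores[1:]) / 3  # skip background (class 0)
```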