I am working on binary segmentation of medical images and plan to use Dice as the quality metric. The problem is that different articles use two different formulas to compute this coefficient.

This one:

```
import torch
import torch.nn as nn

class DiceMetric1(nn.Module):
    def __init__(self, weight=None, size_average=True):
        super(DiceMetric1, self).__init__()

    def forward(self, inputs, targets, smooth=1e-6):
        inputs = inputs.view(-1)
        targets = targets.view(-1)
        intersection = (inputs * targets).sum()
        # squared terms in the denominator
        dice = (2. * intersection + smooth) / (torch.pow(inputs, 2).sum() + torch.pow(targets, 2).sum() + smooth)
        return dice
```

Or like this:

```
class DiceMetric2(nn.Module):
    def __init__(self, weight=None, size_average=True):
        super(DiceMetric2, self).__init__()

    def forward(self, inputs, targets, smooth=1e-6):
        inputs = inputs.view(-1)
        targets = targets.view(-1)
        intersection = (inputs * targets).sum()
        # plain sums in the denominator
        dice = (2. * intersection + smooth) / (inputs.sum() + targets.sum() + smooth)
        return dice
```

In my experiments, the value computed by the first formula is usually greater than the one from the second. Which formula should I use?
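Here is a tiny pure-Python reproduction of that observation (the probability values are made up for illustration). Since `p**2 <= p` for `p` in `[0, 1]`, the squared denominator of the first variant is never larger, so on soft outputs the first metric is always at least as high as the second:

```python
# Soft predictions in (0, 1) against a binary mask; values are illustrative.
inputs = [0.9, 0.2, 0.7, 0.1, 0.6]   # predicted probabilities
targets = [1.0, 0.0, 1.0, 0.0, 1.0]  # binary ground truth
smooth = 1e-6

intersection = sum(p * t for p, t in zip(inputs, targets))

# Variant 1: squared terms in the denominator.
dice_squared = (2 * intersection + smooth) / (
    sum(p * p for p in inputs) + sum(t * t for t in targets) + smooth)

# Variant 2: plain sums in the denominator.
dice_plain = (2 * intersection + smooth) / (
    sum(inputs) + sum(targets) + smooth)

print(dice_squared, dice_plain)  # the squared variant is the larger of the two
```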

The second question: my dataset contains images whose masks have no positive values at all, only background (value 0 in the mask). For such images the metric comes out very small even when the predicted probabilities are close to zero, which does not look logical, because the network correctly recognized that the image is all background. The problem goes away if I set smooth = 1, which is the default in half of the Dice implementations I have seen (the other half use smooth = 1e-6). Which smooth value should I use?
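A minimal sketch of the empty-mask behaviour, again in pure Python with made-up near-zero predictions, using the second (plain-sum) formula:

```python
def dice(inputs, targets, smooth):
    # 1-D Dice on flat lists, matching the plain-sum formula above.
    intersection = sum(p * t for p, t in zip(inputs, targets))
    return (2 * intersection + smooth) / (sum(inputs) + sum(targets) + smooth)

# All-background mask, and a network that correctly predicts ~0 everywhere.
inputs = [0.01, 0.02, 0.0, 0.01]
targets = [0.0, 0.0, 0.0, 0.0]

print(dice(inputs, targets, smooth=1e-6))  # near zero, despite a good prediction
print(dice(inputs, targets, smooth=1.0))   # close to 1, rewards "all background"
```

With smooth = 1e-6 the score is dominated by the tiny residual predictions in the denominator; with smooth = 1 the added constant outweighs them and the score approaches 1.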

I’ve tried every possible combination, but I don’t know which one to choose.