# Why am I getting a value greater than 1 as dice score

I have two segmented images, and I've computed the dice score using the function below. However, I keep getting values greater than 1 (like 11.8, 12.8) as a dice score. Is there a reason why, or is my approach for computing the dice score wrong?

``````
def dice(X, Y):
    # Dice = 2 * |X ∩ Y| / (|X| + |Y|)
    intersection = (X * Y).sum()
    union = X.sum() + Y.sum()
    return (2. * intersection) / union
``````

Did you verify that `X` and `Y` are probabilities in the range `[0, 1]`? If not, this could explain the unexpected values.
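For example, the formula is only bounded by 1 when the inputs are in `[0, 1]`; if the values grow beyond that range, the intersection term grows quadratically and the score can exceed 1. A small sketch (using NumPy arrays for illustration; the same arithmetic applies to tensors):

```python
import numpy as np

def dice(X, Y):
    # Dice = 2 * |X ∩ Y| / (|X| + |Y|)
    intersection = (X * Y).sum()
    union = X.sum() + Y.sum()
    return (2. * intersection) / union

# Binary masks in [0, 1]: the score stays in [0, 1]
X = np.array([1., 1., 0., 0.])
Y = np.array([1., 0., 1., 0.])
d_ok = dice(X, Y)   # 2*1 / (2+2) = 0.5

# Out-of-range values: the score can exceed 1
X_bad = np.array([5., 5., 0., 0.])
Y_bad = np.array([5., 0., 0., 0.])
d_bad = dice(X_bad, Y_bad)  # 2*25 / (10+5) > 1
```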

Yes, `X` and `Y` are normalised between 0 and 1 when the data is loaded. I checked the min and max of `X` and `Y` after training and I see that they are 0 and 24 respectively. I think this is due to some operations I added during the training process. Is it wise to renormalise my image to the range `[0, 1]` (after training) before passing it to the dice function to calculate the score?

It depends on what these values now represent and which of your operations applied during the training process change the range of these tensors.
E.g. if `X` represents raw logits (in which case any value in `[-Inf, +Inf]` would be valid), you could create probabilities using `sigmoid` or `softmax` (it would depend on the actual use case).
The target values should not be changed during training, as these represent the ground truth values.
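As a sketch of that approach (assuming a binary segmentation task and made-up logit values; NumPy is used here for illustration in place of framework-specific ops), you would map logits to probabilities first, optionally threshold them into a hard mask, and only then compute the dice score:

```python
import numpy as np

def dice(X, Y):
    # Dice = 2 * |X ∩ Y| / (|X| + |Y|)
    intersection = (X * Y).sum()
    union = X.sum() + Y.sum()
    return (2. * intersection) / union

logits = np.array([-3.0, 24.0, 0.5, -0.2])  # raw model outputs, any range
probs = 1.0 / (1.0 + np.exp(-logits))       # sigmoid squashes into [0, 1]
preds = (probs > 0.5).astype(np.float64)    # hard binary mask

target = np.array([0., 1., 1., 0.])         # ground truth, left unchanged
score = dice(preds, target)                 # now guaranteed to be in [0, 1]
```

Whether you threshold or pass the soft probabilities directly depends on whether you want a hard metric or a differentiable (soft) dice.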

Alright, thanks! Let me check the output from each operation.