F1-score is NaN during the evaluation phase


I implemented the F1-score myself instead of using sklearn, but in the validation and testing phases it returns NaN. I can't find the bug — could you please give me some advice? I would really appreciate it!

I’m working on a multi-class segmentation task.

def _get_stat(self, logits, target, nb_classes):
    with torch.no_grad():
        pred = torch.argmax(logits, dim=1).view(-1)
        target = target.view(-1)
        pixel_counter = torch.zeros(nb_classes).to(logits.device)
        acc = torch.zeros(nb_classes).to(logits.device)
        f1 = torch.zeros(nb_classes).to(logits.device)
        for k in range(0, nb_classes):
            # tp + fp
            pred_inds = pred == k
            # tp + fn
            target_inds = target == k
            # tn + fn
            non_pred_inds = pred != k
            # tn + fp
            non_target_inds = target != k
            # tp
            intersection = pred_inds[target_inds].long().sum().float()
            # tn
            non_intersection = non_pred_inds[non_target_inds].long().sum().float()
            # fp + fn = (tn + fn) + (tn + fp) - 2 * tn
            denominator = non_pred_inds.long().sum().float() + non_target_inds.long().sum().float() - 2 * non_intersection
            pixel_counter[k] = target_inds.long().sum().float()
            acc[k] = intersection
            f1[k] = (2 * intersection) / (2 * intersection + denominator + 1e-10)
        return (acc[:nb_classes-1].sum() / (pixel_counter[:nb_classes-1].sum() + 1e-10)), f1
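For reference, here is a minimal sanity check I used (the predictions and targets are made up) to confirm what the per-class F1 should be, counting tp/fp/fn directly with boolean masks:

```python
import torch

# Hand-made example: 6 pixels, 3 classes (illustrative values only)
pred   = torch.tensor([0, 0, 1, 1, 2, 2])
target = torch.tensor([0, 1, 1, 1, 2, 0])

nb_classes = 3
for k in range(nb_classes):
    tp = ((pred == k) & (target == k)).sum().float()
    fp = ((pred == k) & (target != k)).sum().float()
    fn = ((pred != k) & (target == k)).sum().float()
    f1 = (2 * tp) / (2 * tp + fp + fn + 1e-10)
    # class 0: 0.5, class 1: 0.8, class 2: ~0.667
    print(k, f1.item())
```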

The NaN value also appears in mean_f1_score, which I calculate as:

# the last class should be ignored
mean_f1_score = f1_score[0:nb_classes-1].sum() / (nb_classes - 1)
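Note that a single NaN entry in f1_score is enough to make the averaged score NaN, so it may help to locate the offending class first. A quick illustration (values are made up):

```python
import torch

# One bad per-class entry poisons the sum/mean
f1_score = torch.tensor([0.8, float('nan'), 0.6])

print(torch.isnan(f1_score))          # flags which class is NaN
print(f1_score.sum() / len(f1_score)) # the whole mean becomes NaN
```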

P.S. The reason I don't use the metrics functions from sklearn is that I think converting tensors to NumPy (i.e., moving them from GPU to CPU) is a time-consuming operation that lowers GPU utilization.
Please correct me if I am wrong.
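To check that intuition, one could roughly time the device-to-host copy; a sketch (the tensor shape is made up, and the timing is only meaningful when CUDA is available):

```python
import time
import torch

# Hypothetical batch of logits; falls back to CPU when no GPU is present
device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(4, 21, 256, 256, device=device)

if device == "cuda":
    torch.cuda.synchronize()
start = time.perf_counter()
y = x.cpu().numpy()  # GPU -> CPU transfer plus NumPy conversion
if device == "cuda":
    torch.cuda.synchronize()
print(f"transfer + conversion took {time.perf_counter() - start:.4f}s on {device}")
```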

Thanks in advance!