Confusion matrix

Hello, I trained an FNN for a 4-class classification problem.
How can I calculate the confusion matrix?


You could use the scikit-learn implementation. Just get your predictions and targets as numpy arrays via .numpy().

Alternatively, you could of course calculate the confusion matrix manually, if you don’t want to install/use scikit-learn.
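For example, here is a minimal sketch of that workflow (the outputs and targets are random placeholders, just to show the conversion):

import torch
from sklearn.metrics import confusion_matrix

# Placeholder outputs and targets for a 4-class problem
outputs = torch.randn(100, 4)          # model logits
targets = torch.randint(0, 4, (100,))  # ground-truth labels

preds = torch.argmax(outputs, dim=1)   # predicted class indices

# scikit-learn works on numpy arrays, so move the tensors to the CPU first
cm = confusion_matrix(targets.cpu().numpy(), preds.cpu().numpy())
print(cm)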


I applied the scikit-learn implementation, but the outputs of the FNN tend towards class 0. As a result my confusion matrix looks weird.


for epoch in range(num_epochs):
    out = model(input_train).to(device)  # forward pass, logits of shape [N, 4]
    _, pred = out.max(1)                 # predicted class indices
    total += target_train.size(0)
    correct += (pred == target_train).sum().item()
    print(input_train)
    print(pred)
    loss = loss_func(out, target_train)
    counter += 1
    print('loss train', "Epoch N", counter, loss.item())  # .item() replaces the deprecated .data[0]
    model.zero_grad()
    loss.backward()
    opt.step()
print('Accuracy of the network on train dataset: {} %'.format(100 * correct / total))

conf_matrix = metrics.confusion_matrix(y.cpu().numpy(), pred.cpu().numpy())  # sklearn expects (y_true, y_pred)

[[530783      0      0      0]
 [  8097      0      0      0]
 [ 20079      0      0      0]
 [ 16682      0      0      0]]

Where could the error be?

It looks like your model does not learn anything useful.
Since your classes are imbalanced, you could try to use a weighted loss function or the WeightedRandomSampler.
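Here is a minimal sketch of both options, assuming a hypothetical imbalanced 4-class dataset (the class counts, tensors, and batch size are made up for illustration):

import torch
from torch.utils.data import TensorDataset, DataLoader, WeightedRandomSampler

# Hypothetical imbalanced 4-class dataset
targets = torch.cat([torch.zeros(900), torch.ones(50),
                     torch.full((30,), 2.), torch.full((20,), 3.)]).long()
data = torch.randn(len(targets), 10)

class_counts = torch.bincount(targets, minlength=4).float()

# Option 1: weight the loss inversely to the class frequency
weight = class_counts.sum() / (4 * class_counts)
criterion = torch.nn.CrossEntropyLoss(weight=weight)

# Option 2: oversample the minority classes with WeightedRandomSampler
sample_weights = 1.0 / class_counts[targets]   # one weight per sample
sampler = WeightedRandomSampler(sample_weights, num_samples=len(targets), replacement=True)
loader = DataLoader(TensorDataset(data, targets), batch_size=32, sampler=sampler)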

But I have to compute a confusion matrix for a multi-class image segmentation problem with high-resolution images, i.e. 1024x2048.
Copying the tensors from the GPU to the CPU (i.e. to numpy) and then calculating the confusion matrix is really time consuming.
I found this, but it is only for binary classification, and I'm not sure how to scale it to multi-class.


@ptrblck I did use the scikit-learn implementation to calculate the confusion matrix. The snippet is like this.


    with torch.no_grad():
        for i, data in enumerate(test_loader, 0):
            # get the inputs and move them to the device
            # (Variable is deprecated since PyTorch 0.4, so .to(device) is enough)
            t_image, mask = data
            t_image, mask = t_image.to(device), mask.to(device)

            output = model(t_image)
            pred = torch.exp(output)
            conf_matrix = confusion_matrix(pred, mask)
            print(conf_matrix)

I am wondering why I am getting this error. What do you think? Is it related to the shape of pred or of the input?

  File "C:\Users\Neda\Anaconda3\lib\site-packages\sklearn\metrics\classification.py", line 88, in _check_targets
    raise ValueError("{0} is not supported".format(y_type))
ValueError: unknown is not supported

Probably scikit-learn doesn’t recognize the format of your inputs to confusion_matrix.
Could you print the shapes and some values of pred and mask?
I guess flattening both inputs should work.


@ptrblck the shapes of the inputs are: t_image: (1, 1, 240, 320), pred: (1, 2, 240, 320), mask: (1, 240, 320).

To calculate the confusion matrix you need the class predictions. Currently it looks like pred contains the logits or probabilities for two classes.
Try to call torch.argmax(pred, 1) to get the predicted classes.
Here is a small example:

from sklearn.metrics import confusion_matrix

output = torch.randn(1, 2, 4, 4)                            # logits for 2 classes
pred = torch.argmax(output, 1)                              # predicted class per pixel
target = torch.empty(1, 4, 4, dtype=torch.long).random_(2)  # random ground truth
confusion_matrix(pred.view(-1), target.view(-1))            # flatten both inputs

@ptrblck Thanks a lot, yes it works! :slight_smile:


I have an idea but don’t know whether it works.

Change pred and target into one-hot format on the GPU, then:

TP = pred * target
FP = pred * (1-target)
FN = (1-pred) * target
TN = (1-pred) * (1-target)
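
A minimal sketch of that idea, using torch.nn.functional.one_hot and summing over the batch dimension to get per-class counts (the example tensors are made-up placeholders):

import torch
import torch.nn.functional as F

n_classes = 3
pred_idx = torch.tensor([0, 1, 0, 2, 2])    # predicted class indices
target_idx = torch.tensor([0, 1, 1, 2, 0])  # ground-truth class indices

# One-hot encode on whatever device the tensors already live on
pred = F.one_hot(pred_idx, n_classes).float()
target = F.one_hot(target_idx, n_classes).float()

# Per-class counts, summed over the batch dimension
TP = (pred * target).sum(0)
FP = (pred * (1 - target)).sum(0)
FN = ((1 - pred) * target).sum(0)
TN = ((1 - pred) * (1 - target)).sum(0)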


How would you know when to do pred or (1 - pred), target or (1 - target)?

@ptrblck Hello, may I know how to create a confusion matrix for YOLOv3, for a 3-class classification?

I don’t know what kind of classification output YOLOv3 returns, but for a multi-class classification the linked sklearn.metrics.confusion_matrix should work, while for a multi-label classification you could use sklearn.metrics.multilabel_confusion_matrix.
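
For reference, here is a small sketch of the multi-label case (the arrays are made up; multilabel_confusion_matrix returns one 2x2 [[tn, fp], [fn, tp]] matrix per class):

import numpy as np
from sklearn.metrics import multilabel_confusion_matrix

# Made-up multi-label targets and predictions for 3 classes (one column per class)
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0]])
y_pred = np.array([[1, 0, 0],
                   [0, 1, 1],
                   [1, 0, 0]])

print(multilabel_confusion_matrix(y_true, y_pred))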

import numpy as np
import matplotlib.pyplot as plt


def ap_per_class(tp, conf, pred_cls, target_cls):
    """ Compute the average precision, given the recall and precision curves.
    Source: https://github.com/rafaelpadilla/Object-Detection-Metrics.
    # Arguments
        tp: True positives (nparray, nx1 or nx10).
        conf: Objectness value from 0-1 (nparray).
        pred_cls: Predicted object classes (nparray).
        target_cls: True object classes (nparray).
    # Returns
        The average precision as computed in py-faster-rcnn.
    """
    # Sort by objectness
    i = np.argsort(-conf)
    tp, conf, pred_cls = tp[i], conf[i], pred_cls[i]

    # Find unique classes
    unique_classes = np.unique(target_cls)

    # Create Precision-Recall curve and compute AP for each class
    pr_score = 0.1  # score to evaluate P and R https://github.com/ultralytics/yolov3/issues/898
    s = [unique_classes.shape[0], tp.shape[1]]  # number of classes, number of IoU thresholds (i.e. 10 for mAP0.5...0.95)
    ap, p, r = np.zeros(s), np.zeros(s), np.zeros(s)

    for ci, c in enumerate(unique_classes):
        fig, ax = plt.subplots(1, 1, figsize=(5, 5))
        i = pred_cls == c
        n_gt = (target_cls == c).sum()  # Number of ground truth objects
        n_p = i.sum()  # Number of predicted objects

        if n_p == 0 or n_gt == 0:
            continue

        # Accumulate FPs and TPs
        fpc = (1 - tp[i]).cumsum(0)
        tpc = tp[i].cumsum(0)

        # Recall
        recall = tpc / (n_gt + 1e-16)  # recall curve
        r[ci] = np.interp(-pr_score, -conf[i], recall[:, 0])  # r at pr_score, negative x, xp because xp decreases

        # Precision
        precision = tpc / (tpc + fpc)  # precision curve
        p[ci] = np.interp(-pr_score, -conf[i], precision[:, 0])  # p at pr_score

        # AP from recall-precision curve (compute_ap is assumed to be defined alongside this function)
        for j in range(tp.shape[1]):
            ap[ci, j] = compute_ap(recall[:, j], precision[:, j])

        # Plot and save the per-class PR curve
        ax.plot(recall, precision)
        ax.set_xlabel('Recall')
        ax.set_ylabel('Precision')
        ax.set_xlim(0, 1.01)
        ax.set_ylim(0, 1.01)
        fig.tight_layout()
        fig.savefig(f'PR_curve_{c}.png', dpi=300)

    # Compute F1 score (harmonic mean of precision and recall)
    f1 = 2 * p * r / (p + r + 1e-16)
    print(ap)

    return p, r, ap, f1, unique_classes.astype('int32')

Thank you for the response, but may I request you to have a look at this snippet and say where the call to multilabel_confusion_matrix fits in?
@ptrblck


multilabel_confusion_matrix expects the multi-label target as well as the predicted classes as the inputs.
Based on your code snippet I guess target_cls and pred_cls would be the corresponding tensors.

Hello, bro. Does ptrblck’s reply work? I have a problem with the confusion matrix too, and I’m going to calculate YOLO’s accuracy. Waiting for your reply. <3

Hi everyone, I made a class for the confusion matrix that supports CUDA. I hope it helps other people as well. The trick to make it work was to use torch.bincount.

import torch


class ConfusionMatrix:
    _device = 'cuda' if torch.cuda.is_available() else 'cpu'

    def __init__(self, n_classes: int = 10):
        # Flat n*n count vector; reshaped into a matrix in the `value` property
        self._matrix = torch.zeros(n_classes * n_classes, dtype=torch.long, device=self._device)
        self._n = n_classes

    def cpu(self):
        # Tensor.cpu() returns a new tensor, so the result has to be reassigned
        self._matrix = self._matrix.cpu()

    def cuda(self):
        self._matrix = self._matrix.cuda()

    def to(self, device: str):
        self._matrix = self._matrix.to(device)

    def __add__(self, other):
        if isinstance(other, ConfusionMatrix):
            self._matrix.add_(other._matrix.to(self._matrix.device))
        elif isinstance(other, tuple):
            self.update(*other)
        else:
            return NotImplemented
        return self

    def update(self, prediction: torch.Tensor, label: torch.Tensor):
        # Encode each (prediction, label) pair as a single flat index and
        # count the occurrences with bincount -- this keeps everything on the GPU
        conf_data = (prediction * self._n + label).to(self._matrix.device)
        conf = conf_data.bincount(minlength=self._n * self._n)
        self._matrix.add_(conf)

    @property
    def value(self):
        # Transposed so that rows are true labels and columns are predictions
        return self._matrix.view(self._n, self._n).T


def main():
    label = torch.tensor([0, 0, 0, 1, 1, 1, 2, 2, 2])
    pred = torch.tensor([0, 1, 0, 0, 0, 1, 2, 2, 2])
    conf = ConfusionMatrix(3)
    conf += pred, label
    print(conf.value)

    conf2 = ConfusionMatrix(3)
    pred2 = torch.tensor([0, 0, 0, 1, 1, 1, 2, 2, 2])
    conf2.update(pred2, label)
    print(conf2.value)

    conf += conf2
    print(conf.value)


if __name__ == '__main__':
    main()