Implementation of the multiclass case for precision metric

Hi, I wanted to try the example of the multiclass case for the precision metric from the ignite library with average=True. I used the exact same values and code but still got an error. I was also interested in testing the different cases: binary and multilabel seem to work, but the multiclass case does not work as it should according to the docs.

import torch
from ignite.engine import Engine
from ignite.metrics import Precision
def process_function(engine, data):
    return data[0][0], data[0][1]
default_evaluator = Engine(process_function)

metric = Precision(average=True)
metric.attach(default_evaluator, "precision")
y_true = torch.Tensor([2, 0, 2, 1, 0, 1]).long()
y_pred = torch.Tensor([
    [0.0266, 0.1719, 0.3055],
    [0.6886, 0.3978, 0.8176],
    [0.9230, 0.0197, 0.8395],
    [0.1785, 0.2670, 0.6084],
    [0.8448, 0.7177, 0.7288],
    [0.7748, 0.9542, 0.8573],
])
state = default_evaluator.run([[y_pred, y_true]])
print(state.metrics["precision"])

This is the error I get:

In the docs link below, you can find the same code block for the multiclass case, and it reports the precision as 0.6111…

I would be very grateful if someone could point out where I was going wrong, or if I need to do something else to test the multiclass case for precision.

Thank you!

Thanks for pointing out potential issues with our documentation and doctests!
In your example, the definition of default_evaluator should be different. For our doctests it is defined here:
https://github.com/pytorch/ignite/blob/5c5837ac2528e9e9f1451dfbceacfc0deb86fe56/docs/source/conf.py#L349-L352

def eval_step(engine, batch):
    return batch

default_evaluator = Engine(eval_step)

The complete example is the following:

import torch
from ignite.metrics import Precision
from ignite.engine import Engine

def eval_step(engine, batch):
    return batch

default_evaluator = Engine(eval_step)

metric = Precision(average=True)
metric.attach(default_evaluator, "precision")
y_true = torch.Tensor([2, 0, 2, 1, 0, 1]).long()
y_pred = torch.Tensor([
    [0.0266, 0.1719, 0.3055],
    [0.6886, 0.3978, 0.8176],
    [0.9230, 0.0197, 0.8395],
    [0.1785, 0.2670, 0.6084],
    [0.8448, 0.7177, 0.7288],
    [0.7748, 0.9542, 0.8573],
])
state = default_evaluator.run([[y_pred, y_true]])
print(state.metrics["precision"])
> 0.6111111
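
As a quick sanity check (a sketch, assuming scikit-learn is installed and that average=True averages the per-class precisions), you can reproduce the same number by taking the argmax of y_pred as the predicted labels and computing macro-averaged precision:

import torch
from sklearn.metrics import precision_score

y_true = torch.tensor([2, 0, 2, 1, 0, 1])
y_pred = torch.tensor([
    [0.0266, 0.1719, 0.3055],
    [0.6886, 0.3978, 0.8176],
    [0.9230, 0.0197, 0.8395],
    [0.1785, 0.2670, 0.6084],
    [0.8448, 0.7177, 0.7288],
    [0.7748, 0.9542, 0.8573],
])
pred_labels = y_pred.argmax(dim=1)  # tensor([2, 2, 0, 2, 0, 1])
print(precision_score(y_true.numpy(), pred_labels.numpy(), average="macro"))  # ~0.6111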

Hope this helps!

Hi, thanks for the question.

The error comes from the process_function you used in the Engine. In this very simple example, the output of this function feeds directly into the metric. Since the batch passed to run() is [y_pred, y_true], data[0] is y_pred. That means data[0][0] is [0.0266, 0.1719, 0.3055] and data[0][1] is [0.6886, 0.3978, 0.8176], i.e. two rows of predictions rather than a (y_pred, y_true) pair, so the metric cannot use them.
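
To make this concrete, here is a minimal sketch (same tensors as above) of what that process_function actually hands to the metric:

import torch

y_pred = torch.tensor([
    [0.0266, 0.1719, 0.3055],
    [0.6886, 0.3978, 0.8176],
    [0.9230, 0.0197, 0.8395],
    [0.1785, 0.2670, 0.6084],
    [0.8448, 0.7177, 0.7288],
    [0.7748, 0.9542, 0.8573],
])
y_true = torch.tensor([2, 0, 2, 1, 0, 1])

data = [y_pred, y_true]   # the batch passed via run([[y_pred, y_true]])
print(data[0][0])         # tensor([0.0266, 0.1719, 0.3055]) -- first row of y_pred
print(data[0][1])         # tensor([0.6886, 0.3978, 0.8176]) -- second row of y_pred, not y_true

Returning the batch unchanged, as eval_step does in the doctest setup, gives the metric the (y_pred, y_true) pair it expects.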

Thank you so much for the reply!

Thank you! Now I understand why it was giving an error.