Hi, I wanted to try the multiclass example for the precision metric from the ignite library with average=True. I used the exact same values and code but still got an error. I was interested in testing the different cases: binary and multilabel seem to work, but the multiclass case does not behave as shown in the docs.
```python
import torch
from ignite.engine import Engine
from ignite.metrics import Precision

def process_function(engine, data):
    return data, data

default_evaluator = Engine(process_function)

metric = Precision(average=True)
metric.attach(default_evaluator, "precision")

y_true = torch.Tensor([2, 0, 2, 1, 0, 1]).long()
y_pred = torch.Tensor([
    [0.0266, 0.1719, 0.3055],
    [0.6886, 0.3978, 0.8176],
    [0.9230, 0.0197, 0.8395],
    [0.1785, 0.2670, 0.6084],
    [0.8448, 0.7177, 0.7288],
    [0.7748, 0.9542, 0.8573],
])

state = default_evaluator.run([[y_pred, y_true]])
print(state.metrics["precision"])
```
This is the error I get:
In the docs link below, you can find the same code block for the multiclass case, where the reported precision is 0.6111…
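For reference, the 0.6111 value from the docs can be reproduced by hand: the predicted class is taken as the argmax over the class dimension of y_pred, precision is computed per class, and the three scores are macro-averaged. This is a minimal pure-Python sketch of that computation (no ignite dependency; variable names are my own, not from the library):

```python
# Reproduce the docs' multiclass precision (average=True) by hand.
y_true = [2, 0, 2, 1, 0, 1]
y_scores = [
    [0.0266, 0.1719, 0.3055],
    [0.6886, 0.3978, 0.8176],
    [0.9230, 0.0197, 0.8395],
    [0.1785, 0.2670, 0.6084],
    [0.8448, 0.7177, 0.7288],
    [0.7748, 0.9542, 0.8573],
]

# Predicted class = index of the highest score per row.
y_pred = [row.index(max(row)) for row in y_scores]  # [2, 2, 0, 2, 0, 1]

num_classes = 3
per_class = []
for c in range(num_classes):
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == c and t == c)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == c and t != c)
    per_class.append(tp / (tp + fp) if (tp + fp) else 0.0)

macro_precision = sum(per_class) / num_classes
print(round(macro_precision, 4))  # 0.6111
```

Per-class precisions come out as 0.5, 1.0, and 1/3, whose mean is 0.6111, matching the docs. So the expected value is consistent with the inputs; the question is why the ignite call itself errors out.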
I would be very grateful if someone could point out where I am going wrong, or whether I need to do something else to test the multiclass case for precision.