How to access inputs in a custom Ignite Metric?

I have implemented a custom Ignite Metric based on this tutorial.

def update(self, output):
    y_pred, y = output

How can I access the inputs (x values) in the update function?

The simplest way to add x is to pass it into the output as (y_pred, y, x).
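For example, a minimal sketch assuming a standard create_supervised_evaluator setup (its output_transform receives x, y, y_pred); MyMetric stands in for your custom metric:

from ignite.engine import create_supervised_evaluator

# Make the evaluator emit (y_pred, y, x) instead of the default (y_pred, y).
evaluator = create_supervised_evaluator(
    model,
    metrics={"my_metric": MyMetric()},  # hypothetical custom metric
    output_transform=lambda x, y, y_pred: (y_pred, y, x),
)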

Another way (a cleaner implementation) is to reimplement the iteration_completed() method: https://github.com/pytorch/ignite/blob/6faa6ac1e3a46c79e0dfcfd976439b86329717b0/ignite/metrics/metric.py#L198

and pass the update method everything it needs without “hiding” things: output, input, etc.
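A minimal sketch of that override, assuming the default supervised setup where engine.state.batch is (x, y) and engine.state.output is (y_pred, y); the metric body is just a placeholder:

from ignite.metrics import Metric


class MyMetric(Metric):  # hypothetical custom metric, for illustration only

    def iteration_completed(self, engine):
        # Pull the inputs from the current batch and the predictions/targets
        # from the engine output, then hand all of them to update().
        x, _ = engine.state.batch
        y_pred, y = self._output_transform(engine.state.output)
        self.update((y_pred, y, x))

    def reset(self):
        self._num_examples = 0

    def update(self, output):
        y_pred, y, x = output
        self._num_examples += x.shape[0]  # placeholder accumulation

    def compute(self):
        return self._num_examples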

Let me know if this answers your question. Thanks!

How do I pass x into the output without breaking the other metrics?

evaluator = create_supervised_evaluator(
    model,
    metrics={
        "loss": Loss(criterion),
        "accuracy": Accuracy(),
        "accuracy_pix2pix": PixelToPixelAccuracy(),  # custom metric to extend
    },
)

Pass all the necessary data (y, predictions, x) to all metrics and use output_transform on the existing metrics to filter out the args they require.
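A minimal sketch of that approach, reusing model, criterion, and PixelToPixelAccuracy from the snippet above and assuming the custom metric consumes the full (y_pred, y, x) tuple:

from ignite.engine import create_supervised_evaluator
from ignite.metrics import Accuracy, Loss

evaluator = create_supervised_evaluator(
    model,
    metrics={
        # Existing metrics only need (y_pred, y), so they drop x.
        "loss": Loss(criterion, output_transform=lambda out: (out[0], out[1])),
        "accuracy": Accuracy(output_transform=lambda out: (out[0], out[1])),
        # The custom metric receives the full (y_pred, y, x).
        "accuracy_pix2pix": PixelToPixelAccuracy(),
    },
    output_transform=lambda x, y, y_pred: (y_pred, y, x),
)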

Probably, a more elegant way to do this is:

  • return the output as a dictionary with keys like “y” for the target, “y_pred” for the predictions, and “x” for the input x.
  • override _required_output_keys for your custom metric as the tuple ("y", "y_pred", "x")

Thus, I think it should be possible to fetch the needed args without using an output_transform…

Can you please provide a short code example?

Here is a colab with an example: https://colab.research.google.com/drive/1-EL_YGLPzEPIw_6jU-tWRRLXnx0lN106?usp=sharing

import torch
import torch.nn as nn

from ignite.metrics import Metric, Accuracy
from ignite.engine import create_supervised_evaluator


class CustomMetric(Metric):

    # Keys to fetch (in this order) from the dict returned by the evaluator's
    # output_transform; the base Metric passes them to update() as a tuple.
    _required_output_keys = ("y_pred", "y", "x")

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
    def update(self, output):
        print("CustomMetric: output=")
        for i, o in enumerate(output):
            print(i, o.shape)

    def reset(self):
        pass

    def compute(self):
        return 0.0



model = nn.Linear(10, 3)

metrics = {
    "Accuracy": Accuracy(),
    "CustomMetric": CustomMetric()
}

# Return a dict so that each metric can pick out the keys it declares in
# _required_output_keys (Accuracy uses the default ("y_pred", "y")).
evaluator = create_supervised_evaluator(
    model,
    metrics=metrics,
    output_transform=lambda x, y, y_pred: {"x": x, "y": y, "y_pred": y_pred}
)

data = [
    (torch.rand(4, 10), torch.randint(0, 3, size=(4, ))),
    (torch.rand(4, 10), torch.randint(0, 3, size=(4, ))),
    (torch.rand(4, 10), torch.randint(0, 3, size=(4, )))
]
res = evaluator.run(data)

Output:

CustomMetric: output=
0 torch.Size([4, 3])
1 torch.Size([4])
2 torch.Size([4, 10])
CustomMetric: output=
0 torch.Size([4, 3])
1 torch.Size([4])
2 torch.Size([4, 10])
CustomMetric: output=
0 torch.Size([4, 3])
1 torch.Size([4])
2 torch.Size([4, 10])

@odats, since v0.4.2 (which will be released Sep 22-25th), please consider using required_output_keys as a public class attribute instead of the private _required_output_keys.
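A minimal sketch of that change, assuming the same CustomMetric as above on v0.4.2+:

class CustomMetric(Metric):

    # Public attribute from v0.4.2 on, replacing the private _required_output_keys.
    required_output_keys = ("y_pred", "y", "x")

    def reset(self):
        pass

    def update(self, output):
        y_pred, y, x = output  # same ordering as declared above

    def compute(self):
        return 0.0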
