My dataloader returns an image, a target, and a weight map for my loss function. For this, I've written a custom update function for the trainer engine (because I have to pass that weight map to the loss function). But during evaluation, I don't know how (or whether) it's possible to include this weight map and attach the loss as a metric to the evaluator. prepare_batch expects an input-target pair (I can't return more than 2 items), and I don't think I can pass the weight map through output_transform…
I need to unpack the batch, which has 3 items, and follow the same scheme for evaluation. If I could unpack the batch inside loss_fn (in metrics.Loss), or via prepare_batch in create_supervised_evaluator, that would do the trick.
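For reference, a minimal sketch of the kind of custom training step described above, assuming a model, optimizer, device, and a criterion that takes the weight map as an extra argument (all names here are illustrative, not from the original post):

```python
import torch
from ignite.engine import Engine

# Assumed to exist: model, optimizer, device, and a criterion with the
# signature criterion(y_pred, y, weight_map) -- e.g. a weighted pixel-wise loss.

def train_step(engine, batch):
    model.train()
    optimizer.zero_grad()
    x, y, weight_map = batch  # 3-item batch from the dataloader
    x, y, weight_map = x.to(device), y.to(device), weight_map.to(device)
    y_pred = model(x)
    loss = criterion(y_pred, y, weight_map=weight_map)  # weight map goes to the loss
    loss.backward()
    optimizer.step()
    return loss.item()

trainer = Engine(train_step)
```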
It works! Thanks!
Now, could you explain why?!
Why does the eval_fn return a dictionary? (kwargs for some function?)
According to the docs: process_function (callable): A function receiving a handle to the engine and the current batch in each iteration, and returns data to be stored in the engine’s state.
How does Loss receive the weight_map?
And finally, for other metrics that only need the outputs and target, do I have to run a separate evaluator to log them?
To pass additional arguments to Loss, we need to set the output in the format (prediction, target, kwargs), as described in the docs:
output_transform (callable): a callable that is used to transform the Engine's process_function's output into the form expected by the metric. This can be useful if, for example, you have a multi-output model and you want to compute the metric with respect to one of the outputs. The output is expected to be a tuple (prediction, target) or (prediction, target, kwargs) where kwargs is a dictionary of extra keyword arguments. If extra keyword arguments are provided, they are passed to loss_fn.
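Putting this together: the custom evaluator step can return a dict (it ends up in engine.state.output), and each metric's output_transform extracts what it needs; the third tuple element is the kwargs dict that Loss forwards to loss_fn. A minimal sketch, where the key names (y_pred, y, weight_map), the metric name, and criterion/model/device are assumptions carried over from the sketch above:

```python
import torch
from ignite.engine import Engine
from ignite.metrics import Loss

def eval_step(engine, batch):
    model.eval()
    with torch.no_grad():
        x, y, weight_map = batch
        x, y, weight_map = x.to(device), y.to(device), weight_map.to(device)
        y_pred = model(x)
    # Whatever is returned here is stored in engine.state.output,
    # so return everything the attached metrics might need.
    return {"y_pred": y_pred, "y": y, "weight_map": weight_map}

evaluator = Engine(eval_step)

# (prediction, target, kwargs): the kwargs dict is forwarded to the loss function,
# so criterion(y_pred, y, weight_map=...) receives the weight map during evaluation.
Loss(
    criterion,
    output_transform=lambda out: (out["y_pred"], out["y"], {"weight_map": out["weight_map"]}),
).attach(evaluator, "weighted_loss")
```

As for metrics that only need (prediction, target): they can be attached to the same evaluator with an output_transform that drops the extra entry, e.g. `output_transform=lambda out: (out["y_pred"], out["y"])`, so a second evaluator is not required.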