# Ignite metrics (recall, precision, etc) - thresholded_output_transform

I’m working on a binary task and I’m computing Recall, Precision, etc., as implemented by Ignite. As per the documentation:

“In binary and multilabel cases, the elements of `y` and `y_pred` should have 0 or 1 values. Thresholding of predictions can be done as below:”

```python
def thresholded_output_transform(output):
    y_pred, y = output
    y_pred = torch.round(y_pred)
    return y_pred, y
```

Am I correct to assume that `output` is what is returned by the model?
I’m using BCEWithLogitsLoss, so I have to pass logits through a sigmoid function. I have to use a sigmoid here as well, right?

```python
def thresholded_output_transform(output):
    y_pred, y = output
    y_pred = torch.sigmoid(y_pred)
    y_pred = torch.clamp(y_pred, 0., 1.)
    y_pred = torch.round(y_pred)
    return y_pred, y
```

`output` is what is returned by the evaluator’s `process_function`: https://pytorch.org/ignite/metrics.html

> I’m using BCEWithLogitsLoss, so I have to pass logits through a sigmoid function. I have to use a sigmoid here as well, right?

Well, it depends on how you choose to decide whether a logit’s value is large enough to be interpreted as 1.
Yes, it is more convenient to map the logits to “probabilities” with a sigmoid and then apply a threshold (0.5), as you do in your version of `thresholded_output_transform`.
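Incidentally, with the default 0.5 threshold the sigmoid is not strictly needed: `sigmoid(x) >= 0.5` exactly when `x >= 0`, so thresholding the probability at 0.5 gives the same labels as checking the sign of the raw logit. A quick pure-Python check (using `math.exp` in place of torch, purely for illustration; the numbers are made up):

```python
import math

def sigmoid(x):
    # Plain logistic function; fine for small illustrative inputs.
    return 1.0 / (1.0 + math.exp(-x))

logits = [-2.3, -0.1, 0.0, 0.4, 3.7]

# Thresholding the probability at 0.5 ...
via_sigmoid = [1.0 if sigmoid(x) >= 0.5 else 0.0 for x in logits]
# ... gives the same labels as thresholding the raw logit at 0.
via_logit = [1.0 if x >= 0.0 else 0.0 for x in logits]

assert via_sigmoid == via_logit  # both are [0.0, 0.0, 1.0, 1.0, 1.0]
```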

Here is another way to apply the thresholding:

```python
t = 0.55

def thresholded_output_transform(output):
    y_pred, y = output
    y_pred = torch.sigmoid(y_pred)
    y_pred = (y_pred > t).float()
    return y_pred, y
```
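To see why the choice of threshold matters, here is a small pure-Python sketch (no torch; `math.exp` stands in for `torch.sigmoid`, and the logits are made up for illustration) showing how moving the threshold from 0.5 to 0.55 can flip a borderline prediction:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

logits = [-1.0, 0.1, 0.3, 2.0]
probs = [sigmoid(x) for x in logits]  # roughly [0.27, 0.52, 0.57, 0.88]

at_050 = [1.0 if p > 0.50 else 0.0 for p in probs]
at_055 = [1.0 if p > 0.55 else 0.0 for p in probs]

print(at_050)  # [0.0, 1.0, 1.0, 1.0]
print(at_055)  # [0.0, 0.0, 1.0, 1.0] -- the 0.1 logit no longer counts as 1
```

A stricter threshold trades recall for precision: fewer samples are labeled 1, so the positives you keep are more confident.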

@vfdev-5 nice! Thanks!
