Which loss function to optimize recall?

I am training some pre-trained models for a binary classification task, and I want to optimize for recall. It seems like nn.CrossEntropyLoss is the best one for the job, but is there another one, or should I define my own loss function? (I'm new to PyTorch.)
Some help would be great 🙂

Hi Kilian!

Recall (in the context of making classification predictions for a given
dataset) is the percentage of positive samples that you correctly classify
as positive. Note, it doesn’t care whether you classify negative samples
correctly or not.

It is a useful performance metric, but, in isolation, problematic. You can
achieve perfect, 100% recall by having your model classify all samples
as positive – hardly useful.

(Precision – often used as a companion metric – is the percentage of
samples that you classified as positive that actually are positive. This
metric does care whether you incorrectly classify negative samples as
positive.)
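
For concreteness, here is how both metrics come out on a tiny, made-up batch (the logits and labels below are purely illustrative):

```python
import torch

# Made-up raw model outputs (logits) and true labels for five samples.
logits = torch.tensor([1.2, -0.3, 0.8, -1.5, 0.1])
targets = torch.tensor([1., 0., 1., 1., 0.])

# Threshold the sigmoid probabilities at 0.5 to get hard predictions.
preds = (torch.sigmoid(logits) > 0.5).float()

true_pos = ((preds == 1) & (targets == 1)).sum()
recall = true_pos / (targets == 1).sum()    # TP / (TP + FN)
precision = true_pos / (preds == 1).sum()   # TP / (TP + FP); in real code,
                                            # guard against zero predicted positives
print(f"recall: {recall.item():.2f}, precision: {precision.item():.2f}")
```

Classifying every sample as positive would drive recall to 1.00 here, while precision would drop to 0.60 (3 of the 5 samples are positive) – exactly the trade-off described above.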

For a binary classification task, I would recommend that you use
BCEWithLogitsLoss as your loss criterion. (It is a form of cross entropy,
but specifically tailored to the case of binary classification.)
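
A minimal sketch of what that looks like in a training step (the linear model is just a stand-in for your pre-trained network; note that BCEWithLogitsLoss takes raw logits – no final sigmoid – and float targets):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)           # stand-in for your pre-trained model
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

inputs = torch.randn(8, 10)                    # batch of 8 samples
targets = torch.randint(0, 2, (8, 1)).float()  # binary labels, as floats

logits = model(inputs)             # shape (8, 1), raw scores
loss = criterion(logits, targets)

optimizer.zero_grad()
loss.backward()
optimizer.step()
```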

You can bias your training to achieve improved recall – at the cost of
reduced precision – by using BCEWithLogitsLoss’s pos_weight constructor
argument, setting it to a value greater than one.
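
For example (the value 3.0 is purely illustrative – a common starting point is the ratio of negative to positive samples in your training set):

```python
import torch
import torch.nn as nn

# pos_weight > 1 multiplies the loss contribution of positive samples,
# penalizing false negatives more heavily and thereby pushing the model
# toward higher recall (and, typically, lower precision).
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([3.0]))
```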

Best.

K. Frank

Thanks, Frank! I got a good solution now.