Increasing scores instead of decreasing loss in training

I’m trying to solve a binary classification problem, and I’m wondering whether it’s possible to improve/train a model by increasing scores like F1 instead of decreasing BCEWithLogits loss. I know decreasing the loss naturally causes the scores to increase, but I want to try training the model by explicitly increasing the F1 score.


We usually do minimization, so you can try to minimize -F1 (or equivalently 1 - F1) to get what you want. The catch is that F1 is computed from hard, thresholded predictions, which makes it non-differentiable, so in practice you use a "soft" F1 computed from the predicted probabilities.
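As a minimal sketch of that idea (assuming PyTorch; the function name and epsilon value here are illustrative, not from the original post), a soft F1 loss can replace the hard true/false-positive counts with sums over sigmoid probabilities, so gradients flow through:

```python
import torch

def soft_f1_loss(logits, targets, eps=1e-7):
    # Sigmoid probabilities instead of thresholded 0/1 predictions,
    # so the "counts" below are differentiable.
    probs = torch.sigmoid(logits)
    tp = (probs * targets).sum()          # soft true positives
    fp = (probs * (1 - targets)).sum()    # soft false positives
    fn = ((1 - probs) * targets).sum()    # soft false negatives
    f1 = 2 * tp / (2 * tp + fp + fn + eps)
    # Minimizing 1 - F1 (same minimizer as -F1) increases F1.
    return 1 - f1
```

You would use it like any other criterion, e.g. `loss = soft_f1_loss(model(x), y)` followed by `loss.backward()`; note that, unlike BCE, the soft F1 is computed over the whole batch rather than per sample, so very small batches can make it noisy.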


I found a good explanation and implementation of using F1 as a training loss, linked below; it didn’t work for my dataset, though.