How to tune recall and precision for a neural network

Hi all,
I am using a three-dense-layer architecture for a multi-class classification problem. I can calculate the metrics (recall and precision), but my use case requires one of them to be higher than the other. How can I tune an MLP classifier for such a use case?

It’s hard to suggest ways without knowing what your data looks like. A few things that have worked for me in the past:

  1. Ensure that train, valid, and test data come from the same distribution.
  2. Check that the model is trained correctly (i.e., neither underfitting nor overfitting).
  3. Improve precision: analyse the confusion matrix, look at the examples that are misclassified (false positives), and mine more training data similar to those misclassified examples (FP mining). Retrain the model.
  4. Improve recall: again, look at the confusion matrix, this time for false negatives (FNs). It may be that one of the classes has very few examples, so the model never learns to classify it. Add more data (or oversample that class) and train again.
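A minimal sketch of steps 3 and 4, assuming scikit-learn and NumPy are available. The labels here are toy data standing in for your validation-set predictions; the idea is to locate where FPs/FNs concentrate, pull out those example indices for inspection, and naively oversample the weakest class:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, precision_score, recall_score

# Toy predictions for a 3-class problem (labels 0, 1, 2); in practice
# y_true / y_pred would come from your validation set.
y_true = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2, 2])
y_pred = np.array([0, 1, 0, 1, 2, 2, 2, 0, 2, 2])

cm = confusion_matrix(y_true, y_pred)
print(cm)  # rows = true class, columns = predicted class

# Per-class precision and recall show which class drags the average down.
print(precision_score(y_true, y_pred, average=None))
print(recall_score(y_true, y_pred, average=None))

# Indices of false negatives for one class, e.g. class 2 — inspect these,
# then mine or generate similar data:
fn_idx = np.where((y_true == 2) & (y_pred != 2))[0]
print(fn_idx)

# Naive oversampling sketch: duplicate rows of the rarest class
# (class 1 here has only 2 examples) before retraining.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 4))      # hypothetical features matching y_true
minority = 1
idx = np.where(y_true == minority)[0]
extra = rng.choice(idx, size=3)   # draw extra copies with replacement
X_bal = np.vstack([X, X[extra]])
y_bal = np.concatenate([y_true, y_true[extra]])
```

In a real pipeline you would only oversample the training split, never validation or test, or the metrics stop being honest.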

In most cases there’s a tradeoff between precision and recall: pushing one up usually pulls the other down. For a given trained model, you can move along that tradeoff by adjusting the decision threshold on a class’s predicted probability.
