Suppression of false detections in one-class object detection

I’m training an object detector with a single class.
The trained network detects objects of that class well, but it sometimes identifies false areas as objects with fairly high confidence.
Are there any methods for suppressing such detections?

If you see these in training, too, you could make a point of oversampling them in the next epoch (or an intermediate mini-epoch). More elaborate schemes of training on instances where the model gives undesired results go under the name of “hard example mining”.
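A minimal sketch of the oversampling idea: compute sampling weights from the per-image losses of the last epoch, so that the hardest fraction of images is drawn more often in the next (mini-)epoch. The function name, the `boost` factor, and the loss values are all illustrative assumptions, not part of any particular library.

```python
import random

def hard_example_weights(losses, boost=3.0, top_frac=0.25):
    """Sampling weights that oversample the hardest examples.

    losses: hypothetical per-image training losses from the last epoch;
    the hardest `top_frac` fraction of images gets `boost`x weight.
    """
    n = len(losses)
    cutoff = sorted(losses, reverse=True)[max(1, int(n * top_frac)) - 1]
    return [boost if loss >= cutoff else 1.0 for loss in losses]

# Toy losses for 8 images; images 2 and 4 are "hard"
losses = [0.1, 0.05, 0.9, 0.2, 0.7, 0.15, 0.08, 0.3]
weights = hard_example_weights(losses)

# Draw image indices for a mini-epoch biased toward hard examples
sample = random.choices(range(len(losses)), weights=weights, k=8)
```

In a real training loop these weights would feed a weighted sampler (e.g. a framework's weighted random sampler) rather than `random.choices`.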

Another option could be to add a post-hoc classification step, where you train a separate classifier to give a second opinion on whether each detected region actually shows an object.
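The second-opinion filter could be wired up like this. Everything here is an assumed interface for illustration: `classify_crop` stands in for a separately trained binary classifier that, in real code, would crop the image at the detection box and run a small CNN on the crop.

```python
def filter_detections(detections, classify_crop, threshold=0.5):
    """Keep a detection only if the second-stage classifier agrees.

    detections: list of dicts with 'box' (x, y, w, h) and 'score'
                (the detector's own confidence)
    classify_crop: hypothetical callable, box -> probability of object
    """
    kept = []
    for det in detections:
        p = classify_crop(det["box"])
        if p >= threshold:
            kept.append({**det, "cls_score": p})
    return kept

# Toy stand-in for the trained classifier: accepts only large boxes
fake_classifier = lambda box: 0.9 if box[2] > 50 else 0.2

dets = [{"box": (0, 0, 80, 80), "score": 0.95},
        {"box": (10, 10, 20, 20), "score": 0.88}]
kept = filter_detections(dets, fake_classifier)  # only the large box survives
```

One design note: the classifier sees a different view of the data (tight crops instead of whole images), so it can catch context-free false positives the detector is blind to.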

Best regards


Is it true that skipping some true positives when labeling an image decreases the model’s accuracy (as discussed here)?
If so, doesn’t that mean that to suppress false positives we can just add empty images that contain no true positives?

No, you should label everything correctly, but you can show the network images where nothing of interest is to be seen (and where it mistakenly detects something) more often.
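Showing background-only images more often can be as simple as repeating them in the per-epoch training list. This is a sketch under assumed names (`positives`, `negatives`, `negative_repeat`); the negatives carry empty label lists, which is the correct annotation for an image with no objects.

```python
def build_epoch_list(positives, negatives, negative_repeat=3):
    """Per-epoch training list where background-only images appear
    `negative_repeat` times each; positives appear once each."""
    return positives + negatives * negative_repeat

# Toy items: (image_path, list_of_boxes); negatives have no boxes
positives = [("cat_001.jpg", [(10, 10, 40, 40)])]
negatives = [("empty_001.jpg", []), ("empty_002.jpg", [])]

epoch = build_epoch_list(positives, negatives)  # 1 positive + 6 negatives
```

The repeat factor is something to tune: too high, and the detector may become overly conservative on true objects.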
