YOLO overfitting problem (maybe)

I wrote my own implementation of YOLO.

It was producing quite good detections and classifications.

However, after training it for more epochs, I got quite different results from the model once training finished.

The model can barely find any boxes in the photos.

Out of 200 test photos, it found boxes in only 3. Fortunately, when the model does find a box, it is the correct box.

So I suspect the model may be overfitting.

If this is caused by overfitting, how can I solve the problem?

Which YOLO version have you implemented? I’d advise starting from pretrained weights to reduce overfitting. You can check whether it overfits by comparing your train and val/test results. Does it work really well on train but not so well on val/test? That’s overfitting.
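Something like this rough sketch is all the check takes (the `evaluate_fn` helper is just a placeholder for whatever metric you already compute, e.g. recall or AP):

```python
# Minimal sketch, assuming you already have an evaluation function that
# returns a scalar metric (e.g. AP or recall) for a given data split.
def detection_gap(model, train_data, val_data, evaluate_fn):
    train_score = evaluate_fn(model, train_data)
    val_score = evaluate_fn(model, val_data)
    # A large positive gap (train much better than val) is the overfitting signal.
    print(f"train: {train_score:.3f}  val: {val_score:.3f}  gap: {train_score - val_score:.3f}")
    return train_score - val_score
```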

Hey @Oli,
I have written my own implementation of YOLOv2 and am training it on a face detection problem. I am facing the same issue: my training results are far better than my testing results.
Any thoughts on how to prevent my model from overfitting?
Thanks

Hi Yoge,

Luckily there are tonnes of options to prevent overfitting :slight_smile: The easiest way is to start from pretrained weights (on COCO most commonly). If you need to go further than that, look into getting more data online - Open Images has the face class. How are you benchmarking your model?
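If you’re on Keras/TensorFlow, a minimal sketch of that idea could look like the snippet below. Note that the `tf.keras.applications` backbones are ImageNet-pretrained rather than COCO, and the detection head here is just a placeholder, not your actual YOLO head:

```python
import tensorflow as tf

# Sketch: reuse a pretrained backbone instead of random initialisation and put
# your own YOLO head on top. ImageNet weights are assumed here for illustration.
backbone = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=(416, 416, 3))
for layer in backbone.layers:
    layer.trainable = False  # optionally freeze the pretrained part at first

# Placeholder head: 4 box coords + 1 objectness + 1 class (per-anchor outputs
# would need more filters in a real YOLO head).
head = tf.keras.layers.Conv2D(6, 1, name="yolo_head")(backbone.output)
model = tf.keras.Model(backbone.input, head)
```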

@Oli
Thanks for the reply.
Well, I wanted to train the network from scratch to see the kinds of problems one can run into while training such networks, and hence didn’t use pretrained weights. Also, to make you aware of my training dataset: I am using the WIDER FACE dataset, with 5000 images for training, but only 4 of them contain no face at all. Do you think this is the crux of the overfitting problem?
And by benchmarking, do you mean my accuracy metrics?
Thanks

Hi,

What kind of problems are you thinking of?

5000 images is quite low - especially without pre-training.

I’d say yes, but it depends on your model’s behaviour. Does it predict faces everywhere (false positives) or struggle to find the real faces?

Yup :slight_smile:

Hey there @Oli,

Well, problems like underfitting, overfitting, exploding gradients, loss not decreasing, etc. I encountered the exploding-gradient and loss-not-decreasing problems when I started training, though I found a workaround for them by making some changes to the loss function. Now I am facing overfitting.

Ohh, when I was going through various online sources for the amount of instances required to train YOLOv2, they mentioned around 2000 per class, and since this is just a one-class problem (precisely two classes: face and non-face), I thought 5000 was enough. But I guess they assumed pretrained weights, which I clearly failed to take into consideration. What number do you suggest for a problem like this?

Sometimes it predicts faces other than the real ones too, but my big problem is not getting the object detection score as high as desired for the predicted and non-predicted bounding boxes. I mean the class confidence score is 1 most of the time, but the object detection score is not as high as desired (around 0.2-0.4 for some, and around 0.7 for others, but 0.7 is rare). To get the combined confidence score I take the product of the class confidence and the object detection score, which turns out to be very low. I believe it is because of this low object detection score that I am not able to get all the bounding boxes in the image. Can you please let me know what you think about this problem?
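In code terms, the scoring I’m describing boils down to something like this (made-up numbers, just to illustrate how a low objectness drags the product down):

```python
import numpy as np

objectness = np.array([0.3, 0.7, 0.2])    # per-box object detection scores
class_prob = np.array([0.99, 1.0, 0.98])  # per-box face probability (always ~1 for me)
combined = objectness * class_prob        # combined confidence = product of the two
keep = combined > 0.3                     # so most boxes fall below the threshold
print(combined, keep)
```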

For accuracy metrics, I am currently doing it manually, i.e. running batches of training and validation images through the model after training and looking at the results. After testing on a batch of some 1000 images from the training and validation sets, the training results are far better than the validation ones. I am currently working on a script to get mAP and recall on the validation images.

I know I have asked a lot of questions here, and I am truly thankful for your replies; they help a lot.
Thanks

Hi,

On the topic of pretraining - I’d say that the problems you mentioned happen no matter whether you start from scratch or from pretrained weights. It’s strictly better to start from pretrained weights, so again I’d urge you to try loading some previous weights if you’re looking to combat overfitting. You can then freeze the backbone and only train the last few layers.
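In Keras terms, a sketch of that freezing step could look like this (the `"backbone"` name prefix is just an assumption about how your layers are named):

```python
import tensorflow as tf

def freeze_backbone(model: tf.keras.Model, prefix: str = "backbone") -> tf.keras.Model:
    # Freeze every layer whose name starts with `prefix`; only the detection
    # head keeps training. Adjust the prefix to match your own layer names.
    for layer in model.layers:
        layer.trainable = not layer.name.startswith(prefix)
    return model
```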

How many images are needed - more = better :sweat_smile: If you’re overfitting a lot, it would help to bring in more data. There is no number anyone can give you - it depends on what result you’re looking for. But generally, start with little data -> get more data if you’re not happy with the result.

Object score - you only have one class, right? That means your class prediction can only have one output and is basically useless. I’d skip thinking about the class prediction at all and just focus on the object score. The objectness score will be low, and that’s OK. You can set the threshold wherever you want to :slight_smile:

Look at this GitHub repo if you want some help implementing mAP (it just does AP, but that’s often enough).
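For reference, single-class AP is roughly the following (a sketch that assumes you’ve already matched detections to ground truth and marked each one as a true or false positive):

```python
import numpy as np

def average_precision(scores, is_tp, num_gt):
    # Sort detections by confidence, accumulate TP/FP counts, and take the
    # area under the resulting precision-recall curve.
    order = np.argsort(-np.asarray(scores, dtype=float))
    tp = np.asarray(is_tp, dtype=float)[order]
    fp = 1.0 - tp
    cum_tp, cum_fp = np.cumsum(tp), np.cumsum(fp)
    recall = cum_tp / max(num_gt, 1)
    precision = cum_tp / np.maximum(cum_tp + cum_fp, 1e-9)
    return float(np.trapz(precision, recall))
```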

Thanks for your reply.
I will surely try training with pretrained weights as well and compare the results.

Yesterday I started training with 10000 images, and yes, the overfitting is much less now; I am also getting good predictions whenever I get them. It’s still skipping faces here and there, and the object score is also low, but I guess it’s just a matter of more data and maintaining the learning rate. I would also like to experiment with optimizers: I am using TensorFlow with low-level API calls, so instead of Adam (the current optimizer), I will also train with SGD at varying learning rates and compare the results.
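Something like this is what I have in mind for the optimizer experiment (a sketch; the training loop itself stays the same, only the optimizer object changes):

```python
import tensorflow as tf

def make_optimizer(name, lr):
    # Swap between the two optimizers being compared; momentum 0.9 for SGD is
    # just a common default, not a tuned value.
    if name == "adam":
        return tf.keras.optimizers.Adam(learning_rate=lr)
    return tf.keras.optimizers.SGD(learning_rate=lr, momentum=0.9)

for name, lr in [("adam", 1e-4), ("sgd", 1e-3), ("sgd", 1e-2)]:
    optimizer = make_optimizer(name, lr)
    # ... run the same training loop with this optimizer and compare val results
```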

As far as the object score is concerned, yes, it is still low and does not seem to increase, and this is one area where I have no ideas for how to increase it. The class scores are always very close to 1, so neglecting them doesn’t make any difference. It does make sense to treat it as a one-class problem, which is how I started in the earlier stages, but I then thought of using the same model for different problem domains, not just a one-class problem, so I converted it back to a multi-class model.
If anything comes to mind for increasing the object score, please let me know.

I will surely look into that repo for mAP.

Thanks :slight_smile:

Great :slight_smile: I’m glad things are improving for you.

About the object score - in the official YOLOv3 repo, the author used an object score threshold of ~0.25 to decide whether a bounding box should be considered. YOLO is inherently biased toward low object scores, since ~99% of predicted boxes aren’t matched with a ground-truth box. So I’d advise you not to try to increase the object score but rather to find the sweet spot for the threshold. Maybe I’m missing something about why you’d like to increase the object score, so feel free to explain :slight_smile:
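Finding that sweet spot is just a filtering step, something like this sketch (NumPy arrays assumed):

```python
import numpy as np

def filter_boxes(boxes, objectness, threshold=0.25):
    # Keep only boxes whose objectness clears the chosen cut-off; tune the
    # threshold on your validation set instead of trying to push the raw score up.
    keep = np.asarray(objectness) >= threshold
    return np.asarray(boxes)[keep]
```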

Whereas I am using a threshold of 0.30 for filtering the predictions.

Okay, I will take that into consideration.

The reason is very intuitive: after training YOLO for around 10000 epochs, if I get an object score of 0.3 or 0.4, I feel something is wrong with the training, as I was expecting the score to be much higher given how low the loss is (close to 0.05). Please feel free to correct me or add something.

Thanks :slight_smile:

Aha, I understand. If this is true for the training data, I completely agree. If it’s for the validation/test data, I’d say those numbers are normal.

Well, actually the data I provided above was for the validation dataset. For the training set, the object score is close to 0.8 or 0.7, with some around 0.4 and 0.5 …
If it’s normal, then I spent around a week thinking about how to increase the object score without knowing the normal scenario :sweat_smile:

Thanks for your time @Oli. Will get back to you in case of other complications. :slight_smile:

Hello,
I am trying to use YOLOv4 for traffic light detection, and I am using around 5000 images for training.
I am getting an average loss of around 0.23; it was continuously decreasing, but the mAP I am getting is between 57% and 62%.
The mAP is not increasing above this value.

At 2000 iterations I got an mAP of 62% and a loss of around 0.6. With further training to 8000 iterations, the loss decreased to 0.23, but the mAP is still stuck between 57% and 62%.

Could anyone please suggest whether this is an overfitting problem?

And how could I tackle it to get a higher mAP?

Thanks in advance…