I have tried this code: GitHub - facebookresearch/detectron2: Detectron2 is FAIR's next-generation platform for object detection, segmentation and other visual recognition tasks. Using Mask R-CNN with a ResNet-101 backbone, it detects the training images well but does not generalize to new images. How can I improve this, please?
Does no one have an answer, please?
Does nobody have an answer, please? For example, I would like to be able to measure the validation loss, which is not displayed by default.
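Detectron2's default trainer does not log validation loss out of the box, but the general pattern is to evaluate the model on a held-out set every N iterations and watch whether that loss starts rising while training loss keeps falling. Here is a minimal, framework-agnostic sketch of that monitoring logic; the `train_step` and `val_loss_fn` callables are stand-ins for your trainer and evaluation code, not Detectron2 APIs:

```python
def monitor_validation(train_step, val_loss_fn, max_iters, eval_period, patience):
    """Train, evaluating validation loss every `eval_period` iterations.

    Stops early if validation loss fails to improve `patience` evaluations
    in a row, a common sign the model has started to overfit.
    """
    best_val = float("inf")
    bad_evals = 0
    history = []  # (iteration, validation loss) pairs

    for it in range(1, max_iters + 1):
        train_step(it)  # one optimization step (stand-in for trainer logic)
        if it % eval_period == 0:
            val_loss = val_loss_fn(it)  # loss on held-out data
            history.append((it, val_loss))
            if val_loss < best_val:
                best_val = val_loss
                bad_evals = 0  # in practice, also checkpoint the model here
            else:
                bad_evals += 1
                if bad_evals >= patience:
                    break  # validation loss stopped improving
    return history, best_val

# Toy run: validation loss improves until iteration 300, then degrades,
# so monitoring stops at iteration 500 after two bad evaluations.
losses = {100: 0.9, 200: 0.6, 300: 0.5, 400: 0.55, 500: 0.7, 600: 0.9}
history, best = monitor_validation(
    train_step=lambda it: None,
    val_loss_fn=lambda it: losses[it],
    max_iters=600, eval_period=100, patience=2,
)
```

In Detectron2 itself, the equivalent would be a custom hook registered on the trainer that runs the model on a validation data loader at the same cadence.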
Your question might be too general, as there is no silver bullet for making a model generalize. Your best bet would be to check other implementations and reuse their workflow (in particular the model architecture, data augmentation, optimization, etc.).
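To make that concrete, here is a sketch of Detectron2 config settings that often help generalization on a small dataset, assuming you start from the standard Mask R-CNN R-101 model zoo config. The specific values are illustrative defaults, not tuned for your data:

```python
# Sketch: config tweaks that often help generalization with Detectron2's
# Mask R-CNN R-101. The values below are illustrative, not tuned.
from detectron2 import model_zoo
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml"))
# Start from COCO-pretrained weights instead of training from scratch.
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml")
# Multi-scale training is a cheap form of data augmentation.
cfg.INPUT.MIN_SIZE_TRAIN = (640, 672, 704, 736, 768, 800)
cfg.INPUT.RANDOM_FLIP = "horizontal"
# A lower learning rate and some weight decay can reduce overfitting.
cfg.SOLVER.BASE_LR = 0.00025
cfg.SOLVER.WEIGHT_DECAY = 0.0001
```

Stronger augmentation (random crops, color jitter via custom `DatasetMapper` transforms) and simply collecting more varied training images usually matter more than any single solver setting.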
Your question might be too general, as there is no silver bullet to make a model generalize.
My thoughts exactly… @Sylvain_Ard, it may help if you share a bit about how the new data looks… just how different is it?
For my part, I take a very narrow starting point with object detection and watch very carefully to see how the data changes (some call this data drift). Without knowing that, it's hard to say what you should do in the model, or in the data, other than retrain your model(s).
Hello, thank you for your interest in my question. Here is my dataset with the JSON masks in COCO format: Dropbox - trainval - Simplify your life. The goal is to recognize leaflets in one-leaf images.
To test with new data, take a leaf image from the Internet.
The model, after 30,000 iterations, is here for testing: Dropbox - model_final.pth - Simplify your life
So, do you have any ideas?