Hello there, I’m new here. When I use the code for classifying ants and bees to classify people instead, the accuracy is only about 70%. Why is that?
The code you are using shouldn’t matter much, as long as the data used for training is actually for people classification.
Also, could you post details about the problem, the dataset being used, and the training code?
Hi and thank you so much for answering so quickly!
The dataset I used consists of photos of two of my family members. Each person has 100 training samples, 40 validation samples, and 30 test samples.
The current training situation is as follows:
- Training accuracy can reach: 90%+
- Highest validation accuracy: 67%
- Highest test accuracy: 74%
I use the code provided below, with the Inception model and feature extraction.
The training hyperparameters are as follows:
SGD: lr = 0.001, momentum = 0.9
LR Decay: update step = 7, gamma = 0.1
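With step = 7 and gamma = 0.1, a StepLR-style decay multiplies the learning rate by 0.1 once every 7 epochs. As a quick sanity check on what that schedule actually does, here is the same rule in plain Python (a sketch, independent of any framework):

```python
def stepped_lr(base_lr, epoch, step=7, gamma=0.1):
    """Learning rate at a given epoch under step decay:
    multiply base_lr by gamma once every `step` epochs."""
    return base_lr * gamma ** (epoch // step)

# With base_lr=0.001: epochs 0-6 train at 1e-3,
# epochs 7-13 at 1e-4, epochs 14-20 at 1e-5, and so on.
```

So by epoch 14 the learning rate is already 100x smaller than at the start, which is worth keeping in mind if training for many epochs.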
My question is: what aspects can I optimize to improve the generalization ability of the model? I would like the test accuracy to reach 90%.
Do these images contain only cropped faces, or do they contain the entire person, with body and background?
Ways to improve performance:
- More data
- If the two faces are not very distinct except for a few fine-grained facial features, you can try a deeper network (especially DenseNets), attention networks, or handcrafted features from both faces.
Okay, thank you very much for your wonderful answer, very useful !!!
Yes, the samples contain the entire body and the background. Since the ants and bees samples also include backgrounds, I assumed the principle would be the same.
Following your suggestions, I’ll first try the DenseNet model and optimize the samples. Thank you again!
Sure. I would recommend using only faces in your dataset, since the face is the discriminative feature between the two classes.
Do let me know what worked and what didn’t
Hi mailcorahul, I have tested the DenseNet and VGG models, but the accuracy has not improved significantly. However, you suggested including only facial features, and I suddenly realized that including the backgrounds may actually make things worse. If that’s the case, then before classifying an image I should first extract the facial region and then run recognition on it. But that leads to another problem: how do I extract the regions of an image that may contain faces? Does PyTorch have a solution for this?
You can use face detection to find the faces in an image, crop them, and then run your classifier on the crops.
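One lightweight option (outside PyTorch itself) is OpenCV’s Haar-cascade face detector; the `facenet-pytorch` package’s MTCNN is a PyTorch-based alternative. A sketch, assuming `opencv-python` is installed, with a small margin added around each detected box before cropping:

```python
def expand_and_clip(box, img_w, img_h, margin=0.2):
    """Pad an (x, y, w, h) face box by `margin` on each side
    and clip it to the image bounds; returns (x0, y0, x1, y1)."""
    x, y, w, h = box
    dx, dy = int(w * margin), int(h * margin)
    x0, y0 = max(0, x - dx), max(0, y - dy)
    x1, y1 = min(img_w, x + w + dx), min(img_h, y + h + dy)
    return x0, y0, x1, y1

def crop_faces(image_path):
    """Detect faces with OpenCV's Haar cascade and return cropped face arrays.
    The cascade XML file ships with opencv-python."""
    import cv2
    img = cv2.imread(image_path)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    h, w = img.shape[:2]
    crops = []
    for box in boxes:
        x0, y0, x1, y1 = expand_and_clip(box, w, h)
        crops.append(img[y0:y1, x0:x1])
    return crops
```

Running this over your dataset once, saving the crops, and training on those should give the classifier only the discriminative region.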