I am trying to build a face recognition model with PyTorch. The model trains well, with both training and validation loss close to 0. The problem is that when I test it with the same input, it gives me different results, like below:
The left side is the true label and the right side is the prediction output. I set a random seed and ran it again, but the outputs still differ.
What is the problem?

It seems it is doing fine. Could you explain what you expected, and how these values differ from your expectation?
As far as I can see, there is only one error, which is normal since you are testing the model, not training it; for any model that has not overfitted, test loss is higher (and test accuracy lower) than in train mode.
So, what I understand is that if you pass an input x to the model with model.train() everything is fine, but for the same input x, model.eval() gives a worse result. model.eval() only affects layers such as batch norm and dropout, so I guess you have such layers in your model definition. This post can help a lot in that case, as this can be a tricky issue sometimes:
There are a few tricks for solving this issue in the aforementioned thread. I hope it helps.
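To illustrate the point about train vs. eval mode, here is a minimal sketch (the toy model and shapes are made up, not the OP's actual network): in train mode a dropout mask is resampled on every forward pass, so the same input produces different outputs, while eval mode disables dropout and makes the output deterministic.

```python
import torch
import torch.nn as nn

# Hypothetical toy model; any model containing Dropout (or BatchNorm,
# whose running statistics update in train mode) behaves this way.
model = nn.Sequential(nn.Linear(8, 8), nn.Dropout(p=0.5))
x = torch.ones(1, 8)

model.train()
out_a = model(x)
out_b = model(x)
# Dropout masks are resampled each call, so these almost always differ.
print(torch.allclose(out_a, out_b))

model.eval()
out_c = model(x)
out_d = model(x)
# In eval mode dropout is a no-op, so repeated calls match exactly.
print(torch.allclose(out_c, out_d))  # True
```

This is why evaluating a model without calling model.eval() first can give different predictions for the same input on every run.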
I am hoping somebody can help me, because I am banging my head against a brick wall.
I am using the RetinaNet notebook (link posted in my post above). After running different experiments my results were getting worse, nowhere near what I got the first time round, so I decided to do a check.
I looked at the model where I got good results, an mAP of 0.36.
So I took the same model, same notebook, same data, changed NOTHING, and my results are terrible: mAP 0.15. Has anyone come across this before? I haven't changed a thing, but I know that something must be different and I can't seem to find out what it is!
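One common culprit for "nothing changed but the results did" is unseeded randomness (weight init, data shuffling, augmentation, cuDNN kernel selection). As a sketch, seeding every RNG at the top of the notebook makes reruns comparable; the helper name `set_seed` is just an illustrative choice, and all the calls are standard PyTorch/NumPy API:

```python
import random
import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    """Seed every RNG that typically affects a training run."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Force cuDNN to pick deterministic kernels (may be slower).
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(42)
a = torch.randn(3)
set_seed(42)
b = torch.randn(3)
print(torch.equal(a, b))  # True: same seed reproduces the same draws
```

Even with this, some CUDA ops are nondeterministic by design, so small run-to-run variation can remain; a swing from 0.36 to 0.15 mAP, though, usually points at unseeded initialization or data ordering rather than kernel noise.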