I am trying to build an object detection model that detects flies on cows. The YOLO-based detector does a good job, but only when the image quality is top notch and the cow is close to the camera, i.e. when the image is taken from a short distance. So I planned to reject the bad images (in terms of both quality and position) using a classification model. I built a 5-class classification model:
bad - blurry images and cows that are far away.
difficult - somewhat decent images compared to bad.
good - good quality, short-distance cows.
no_cow - images in which cows are absent (basically green pasture images without any cow in them).
not_cow - animals other than cows.
I trained the model in PyTorch, and the .pt file does a good job predicting the classes, with a validation accuracy of 77%. But when I convert the .pt file to .ptl and run inference on mobile on the same set of images (using the react-native-pytorch-core library), the validation accuracy drops to 62%. Moreover, the .ptl model predicts almost exclusively (99% of the time) among the good, no_cow and not_cow classes, whereas the .pt model's predictions are spread fairly evenly across all the classes. What may be the reason?
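For reference, the conversion I do looks roughly like this (a minimal sketch: `TinyClassifier` is just a placeholder here, not my actual network, and I export via TorchScript tracing plus `optimize_for_mobile`):

```python
import torch
import torch.nn as nn
from torch.utils.mobile_optimizer import optimize_for_mobile

# Hypothetical stand-in for the real 5-class classifier,
# just to make the export steps reproducible.
class TinyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 5))

    def forward(self, x):
        return self.net(x)

model = TinyClassifier().eval()

# Trace with a dummy input of the expected shape,
# then optimize and save for the lite interpreter (.ptl).
example = torch.rand(1, 3, 8, 8)
scripted = torch.jit.trace(model, example)
optimized = optimize_for_mobile(scripted)
optimized._save_for_lite_interpreter("classifier.ptl")
```

The resulting .ptl file is then bundled into the React Native app and run through react-native-pytorch-core.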