Hi all,
I have implemented a PyTorch RetinaNet object detection model based on the yhenon GitHub implementation (https://github.com/yhenon/pytorch-retinanet).
I tried to perform inference on a specific image (image A) that is also in the training dataset. I was expecting to get identical, or at least very similar, results (bounding box locations, confidence scores, etc.) for the same image. However, that is not the case at all: the inference results are drastically different.
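For reference, here is roughly how I am running inference. This is a simplified sketch: the checkpoint path, the resize to a fixed 608x608, and the confidence threshold are placeholders, and I am assuming the model returns (scores, labels, boxes) in eval mode, as in the repo's visualize.py.

```python
import torch
import numpy as np
import skimage.io
import skimage.transform

# Load the trained RetinaNet checkpoint (path is a placeholder).
retinanet = torch.load('model_final.pt', map_location='cpu')
retinanet.eval()  # switch BatchNorm/Dropout to inference behavior

# Load and normalize image A with ImageNet mean/std, as the training pipeline does.
image = skimage.io.imread('image_A.jpg').astype(np.float32) / 255.0
image = (image - np.array([0.485, 0.456, 0.406])) / np.array([0.229, 0.224, 0.225])
image = skimage.transform.resize(image, (608, 608))  # fixed resize is a simplification

# HWC -> NCHW float tensor
img_batch = torch.from_numpy(image).permute(2, 0, 1).unsqueeze(0).float()

with torch.no_grad():
    # Assumed eval-mode output: per-detection scores, class labels, box coordinates
    scores, labels, boxes = retinanet(img_batch)

keep = scores > 0.5  # confidence threshold (arbitrary)
print(boxes[keep], scores[keep])
```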
Is this behavior expected?