CocoEval Evaluating Dataset

Hi there,
I have been using DETR on my own dataset and it works very well: I get good mAP and recall on the validation set.

My question is how to run COCOeval so that it reproduces the same (or similar) results the model reported during training. For example, the model achieved an mAP of 0.89 on the validation set during training. To reproduce this, I ran the model in eval mode on the validation set, kept only detections with a confidence score > 0.8, and saved them to a JSON results file. I then ran COCOeval with the validation set annotation JSON and my new resFile as inputs, and it reported an mAP of 0.6, which doesn't match.

How do I get the same or similar results as the model achieved originally? And, following on from that, how do I adjust the confidence threshold and get precision-recall curves for the different thresholds? Thank you.
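For reference, this is roughly how I am running the evaluation (the file paths below are placeholders for my own dataset's annotation and results files):

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Ground-truth annotations for the validation set (COCO format)
coco_gt = COCO("annotations/instances_val.json")

# Detections saved after running the model in eval mode
# (only boxes with score > 0.8 were written to this file)
coco_dt = coco_gt.loadRes("detr_val_results.json")

# Standard bounding-box evaluation
coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # this reports ~0.6 mAP instead of the 0.89 seen during training
```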