Exporting a .pth file to ONNX with torch

Hi,

I'm trying to export a .pth model created by detectron2 to an ONNX model. After some research I ended up with the code below. It does export an ONNX model, but the outputs are odd. I trained with a maximum of 100 detections, which matches the shape of the outputs (boxes, scores, and classes).
However, detectron2's demo.py (using the .pth file) gives good results with scores above 90%, while inference with the ONNX model barely reaches 50%, with all 100 boxes scoring similar values even though they are placed in different locations.

The code below runs and exports a file, but I'm concerned that some parameters are being discarded and the resulting ONNX file is not the ideal one.

Is there anything wrong with this code?

```python
import math
import os

import torch
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.export import TracingAdapter
from detectron2.modeling import build_model
from detectron2.modeling.box_regression import Box2BoxTransform
from detectron2.modeling.roi_heads.fast_rcnn import FastRCNNOutputLayers

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_101_FPN_3x.yaml"))

cfg.OUTPUT_DIR = "onnx_export"
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 2

model = build_model(cfg)
model.eval()

checkpoint = torch.load(os.path.join(cfg.OUTPUT_DIR, "model_final.pth"))
model.load_state_dict(checkpoint["model"])
in_features = model.roi_heads.box_predictor.cls_score.in_features

box2box_transform = Box2BoxTransform(
    weights=(10.0, 10.0, 5.0, 5.0),  # standard weights for bounding box regression
    scale_clamp=math.log(1000.0),  # clamp value to limit scaling
)
model.roi_heads.box_predictor = FastRCNNOutputLayers(
    model.roi_heads.box_head.output_shape,
    num_classes=2,
    box2box_transform=box2box_transform,
)

cpu_device = torch.device("cpu")
x = torch.randn(3, 768, 1344)
model.to(cpu_device)

inputs = [{"image": x}]
print(inputs)
traceable_model = TracingAdapter(model, inputs)

torch.onnx.export(
    traceable_model,
    (inputs[0]["image"],),
    "model.onnx",
    opset_version=17,
    do_constant_folding=True,
    verbose=True,
    input_names=["input"],
    output_names=["output"],
)
```