Converting custom Detectron2 model with torchserve?

Hi everyone,

I have trained a model in Detectron2 using the cascade_mask_rcnn_R_50_FPN_1x.yaml config file. From my understanding, the model_final.pth file saved after training is a state_dict? I want to use torchserve to serve this model as a REST endpoint.

The first step is running torch-model-archiver, which creates a .mar file from the saved model file (model_final.pth). In eager mode, this also needs a model.py file containing the model architecture as an argument. I am not sure what to pass here, since the model was trained from a Detectron2 config file and there is no explicit model.py defining the architecture.
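For reference, the eager-mode invocation I am describing would look roughly like this (the file and directory names here are placeholders, and the --model-file argument is the piece I don't have):

```shell
# Eager-mode archiving: model.py is expected to define the architecture,
# which in my case Detectron2 builds from its config file instead.
torch-model-archiver --model-name my_detectron2_model \
  --version 1.0 \
  --model-file model.py \
  --serialized-file model_final.pth \
  --handler handler.py \
  --export-path model_store
```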

An alternative is to pass a TorchScript version of the model instead of the model.py file. I am not sure how to do this for the Detectron2 model (model_final.pth). Can I just call torch.jit.script() after loading the model state_dict?

If anyone has any pointers or has been able to use torchserve with a Detectron2 trained model please let me know! Thank you!

Bump - does anybody know the answer to this?


torch-model-archiver --model-name xyz --version 1.0 --serialized-file model.pth --handler handler.py

This is how you can convert a Detectron2 model into a .mar file.

NOTE: In your handler.py, load the config with model_zoo.
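To make that NOTE concrete, here is a minimal sketch of what such a handler.py could look like. It assumes detectron2 is installed on the serving host, rebuilds the architecture from the same model_zoo config used for training (the OP's cascade_mask_rcnn_R_50_FPN_1x.yaml), and points the weights at the archived .pth file. The class name, output format, and score handling are illustrative; in a real deployment you would subclass ts.torch_handler.base_handler.BaseHandler.

```python
# Sketch of a custom TorchServe handler for a Detectron2 model.
# Assumptions: detectron2, numpy, and Pillow are available at serving time;
# the config path and output schema below are illustrative, not prescribed.
import io


class Detectron2Handler:
    """Minimal handler sketch; a real one would subclass BaseHandler."""

    def __init__(self):
        self.predictor = None

    def initialize(self, context):
        # Heavy imports are kept inside initialize() so the file can be
        # archived and inspected without detectron2 installed.
        from detectron2 import model_zoo
        from detectron2.config import get_cfg
        from detectron2.engine import DefaultPredictor

        model_dir = context.system_properties.get("model_dir")

        cfg = get_cfg()
        # Rebuild the architecture from the model_zoo config used for
        # training, then load the archived weights.
        cfg.merge_from_file(model_zoo.get_config_file(
            "Misc/cascade_mask_rcnn_R_50_FPN_1x.yaml"))
        cfg.MODEL.WEIGHTS = f"{model_dir}/model_final.pth"
        cfg.MODEL.DEVICE = "cpu"  # or "cuda" if a GPU is available
        self.predictor = DefaultPredictor(cfg)

    def preprocess(self, data):
        import numpy as np
        from PIL import Image

        # TorchServe delivers the request payload under "data" or "body".
        image_bytes = data[0].get("data") or data[0].get("body")
        image = Image.open(io.BytesIO(image_bytes)).convert("RGB")
        # DefaultPredictor expects a BGR numpy array.
        return np.asarray(image)[:, :, ::-1]

    def inference(self, image):
        return self.predictor(image)

    def postprocess(self, outputs):
        instances = outputs["instances"].to("cpu")
        return [{
            "boxes": instances.pred_boxes.tensor.tolist(),
            "scores": instances.scores.tolist(),
            "classes": instances.pred_classes.tolist(),
        }]

    def handle(self, data, context):
        if self.predictor is None:
            self.initialize(context)
        return self.postprocess(self.inference(self.preprocess(data)))
```

Passing this file as --handler to the torch-model-archiver command above is what ties the archived state_dict back to its architecture, which is why no separate model.py is needed.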