Deploying onnx model with TorchServe

An easy way to do this is to load the model inside a custom TorchServe handler, since handlers are quite general in what they can run. There may be a better approach, but at a high level the solution would look something like:

    import onnxruntime as ort

    def load_model(self, model_path):
        # Build an onnxruntime session directly from the .onnx file on disk
        options = ort.SessionOptions()
        return ort.InferenceSession(model_path, options)
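Fleshed out, a minimal custom handler might look like the sketch below. It is only illustrative: the class name `OnnxHandler` and the preprocessing logic are assumptions, and it assumes the `.onnx` file is packaged into the `.mar` archive (so TorchServe exposes it via the context manifest) and that `onnxruntime` is installed in the serving environment.

```python
import os

import numpy as np


class OnnxHandler:
    """Hypothetical TorchServe custom handler that runs an ONNX model
    through onnxruntime instead of a TorchScript/eager PyTorch model."""

    def __init__(self):
        self.session = None
        self.input_name = None
        self.initialized = False

    def initialize(self, context):
        # TorchServe unpacks the .mar archive into model_dir and passes
        # its location (and the serialized file name) via the context.
        import onnxruntime as ort  # imported lazily; assumed installed

        model_dir = context.system_properties.get("model_dir")
        serialized_file = context.manifest["model"]["serializedFile"]
        model_path = os.path.join(model_dir, serialized_file)

        options = ort.SessionOptions()
        self.session = ort.InferenceSession(model_path, options)
        self.input_name = self.session.get_inputs()[0].name
        self.initialized = True

    def preprocess(self, data):
        # Each request row carries its payload under "data" or "body";
        # the float32 batch shape here is an assumption about the model.
        batch = [row.get("data") or row.get("body") for row in data]
        return np.asarray(batch, dtype=np.float32)

    def inference(self, inputs):
        return self.session.run(None, {self.input_name: inputs})

    def postprocess(self, outputs):
        # TorchServe expects one JSON-serializable item per request.
        return [out.tolist() for out in outputs[0]]

    def handle(self, data, context):
        if not self.initialized:
            self.initialize(context)
        return self.postprocess(self.inference(self.preprocess(data)))
```

You would then point `torch-model-archiver --handler` at this file when building the `.mar` archive.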