I am unfamiliar with the PyTorch and Caffe2 frameworks. I want to serve PyTorch models the way tf-serving serves TensorFlow models. After some searching, I found many articles mentioning that a PyTorch model can be converted to a Caffe2 model with the ONNX tool provided by Facebook, and I tried that conversion. But I don’t know how to start the service. I have read the official documentation of Caffe2 and PyTorch, and googled a lot of related content, but I couldn’t find what I wanted, such as how to set up end-to-end Caffe2 serving. Does anyone know how to start Caffe2 serving like tf-serving?
Here is a good explanation of how to serve a Caffe2 model on AWS Lambda.
If that’s not what you are looking for, could you explain your current use case a bit more?
Do you want to deploy your model locally or in the web/cloud?
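In the meantime, if you just need something tf-serving-like locally, one option is to wrap the Caffe2 predictor behind a small HTTP endpoint yourself. Below is a minimal sketch using only the Python standard library; note that `run_model` is a stub standing in for the actual Caffe2 inference call (e.g. running the ONNX-converted model through the Caffe2 backend), since the exact loading code depends on how you exported your model.

```python
# Minimal HTTP "model serving" sketch using only the Python standard library.
# run_model is a placeholder (assumption): in a real deployment it would call
# the Caffe2 predictor built from your ONNX-converted model.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def run_model(inputs):
    # Stub for illustration: replace with actual Caffe2/ONNX inference.
    return {"outputs": [sum(inputs)]}


class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body, e.g. {"inputs": [1.0, 2.0, 3.0]}.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))

        # Run inference and send the result back as JSON.
        body = json.dumps(run_model(payload["inputs"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # silence per-request logging for the demo


def serve(port=8080):
    # Blocks forever, handling POST /predict style requests on localhost.
    HTTPServer(("127.0.0.1", port), PredictHandler).serve_forever()


if __name__ == "__main__":
    serve()
```

You would then POST JSON to the server and get predictions back, much like tf-serving's REST API. For anything production-grade you would want batching, model versioning, and a proper WSGI server, but this is often enough for local testing.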