Deploying models using a TorchServe workflow on AWS


I have multiple PyTorch models that form a DAG. TorchServe has workflows for deploying such a DAG inference pipeline locally, but I am not sure how to use TorchServe workflows to deploy the models on AWS.
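For context, TorchServe workflows describe the DAG in a YAML spec that references the individual model archives (.mar files), which is then packaged into a workflow archive (.war) with `torch-workflow-archiver`. A minimal sketch, where the model names, file names, and handler functions are placeholders for illustration:

```yaml
# workflow.yaml — hypothetical two-model DAG spec for torch-workflow-archiver
models:
  min-workers: 1
  max-workers: 2
  model_a:
    url: model_a.mar   # first model in the DAG (assumed archive name)
  model_b:
    url: model_b.mar   # second model, consumes model_a's output

dag:
  pre_processing: [model_a]    # pre_processing/post_processing are Python
  model_a: [model_b]           # functions defined in the workflow handler file
  model_b: [post_processing]
```

The spec and handler would then be archived with something like `torch-workflow-archiver --workflow-name my_dag --spec-file workflow.yaml --handler workflow_handler.py`, producing `my_dag.war` to place in TorchServe's workflow store.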

Any thoughts on how this can be done are really appreciated.

Thank you!

We have 2 examples you can check out here

What specifically about deploying to AWS are you asking about?

I have already checked out the examples in the repo mentioned above. They seem to serve the workflow on the local machine. I am wondering how to serve the workflow as an AWS endpoint.

For example, I know how to deploy a single PyTorch model using TorchServe on AWS as described in this post (Deploying PyTorch models for inference at scale using TorchServe | AWS Machine Learning Blog). I am wondering if something similar can be done with a TorchServe workflow (when we have multiple models in a DAG). I searched the internet to see if anyone has deployed a TorchServe workflow on AWS, but had no luck. Thank you!
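One plausible approach, mirroring the single-model Docker pattern from that blog post: bake the model archives and the workflow archive into a TorchServe container image, then run it on EC2/ECS/EKS behind a load balancer. A sketch, assuming hypothetical file names and the standard TorchServe image:

```dockerfile
# Sketch only — file names (model_a.mar, my_dag.war) are assumptions.
FROM pytorch/torchserve:latest

# Individual model archives referenced by the workflow spec
COPY model_a.mar model_b.mar /home/model-server/model-store/

# Workflow archive produced by torch-workflow-archiver
COPY my_dag.war /home/model-server/wf-store/

# Start TorchServe with both a model store and a workflow store
CMD ["torchserve", "--start", "--foreground", \
     "--model-store", "/home/model-server/model-store", \
     "--workflow-store", "/home/model-server/wf-store"]
```

Once the container is up, the workflow can be registered through the management API (port 8081), e.g. `curl -X POST "http://<host>:8081/workflows?url=my_dag.war"`, and invoked via the inference API at `http://<host>:8080/wfpredict/my_dag`. Note that SageMaker's built-in TorchServe integration targets single-model endpoints, so a bring-your-own-container setup like the above is likely needed for workflows.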