Multiple models in TorchServe

What is the best way to run inference with multiple models inside TorchServe?

TL;DR > I have to predict a value with model1, then predict another value with model2, whose input is model1's output.

It is my first time using TorchServe, so I don't know whether I should load both models at the same time and use them together inside the inference function…

What is the best practice?
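For reference, the chaining described above can be sketched inside a single custom handler: load both models once at startup, then compose them in the inference step. This is only a minimal sketch of the pattern, not TorchServe's actual handler API; `ChainedHandler`, `model1`, and `model2` are hypothetical names, and the models are plain-callable stand-ins rather than real torch modules, just to show the data flow:

```python
# Sketch of one handler that chains two models.
# model1 and model2 are hypothetical stand-ins (plain callables);
# in a real TorchServe custom handler they would be torch modules
# loaded in initialize() and composed in inference().

class ChainedHandler:
    def initialize(self):
        # Load both models once, when the worker starts.
        self.model1 = lambda x: x * 2   # stand-in for the first model
        self.model2 = lambda y: y + 1   # stand-in for the second model

    def inference(self, data):
        # model2 consumes model1's output directly, so the
        # intermediate value never leaves the worker process.
        intermediate = self.model1(data)
        return self.model2(intermediate)

handler = ChainedHandler()
handler.initialize()
print(handler.inference(10))  # 10 -> 20 -> 21
```

The alternative would be serving each model separately and chaining them from the client, at the cost of an extra network round trip per request.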