How to serve two models on TorchServe?

Can TorchServe run inference for two models at the same time on a single GPU? For example, I want to run inference with model 1 and model 2 on the same system simultaneously. How can I do that?

Yes, a single TorchServe instance can host several models and serve them concurrently. Here is a workflow for serving multiple models on TorchServe.
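First, package each model into its own `.mar` archive and start TorchServe with both registered. A minimal sketch, assuming two TorchScript models saved as `model1.pt` and `model2.pt` (the model names, file paths, and handlers below are placeholders for your own):

```bash
# Package each model into its own .mar archive in a shared model store.
torch-model-archiver --model-name model1 --version 1.0 \
    --serialized-file model1.pt --handler image_classifier \
    --export-path model_store

torch-model-archiver --model-name model2 --version 1.0 \
    --serialized-file model2.pt --handler text_classifier \
    --export-path model_store

# Start TorchServe and register both models from the same model store.
torchserve --start --model-store model_store \
    --models model1=model1.mar model2=model2.mar
```

Each registered model gets its own prediction endpoint and its own pool of worker processes, so requests to the two models are handled independently and can run at the same time on the same GPU. Note that the workers share GPU memory, so both models must fit on the device together. For example (input file names are again placeholders):

```bash
# Query both models concurrently; each model has its own endpoint.
curl http://127.0.0.1:8080/predictions/model1 -T sample_input1.jpg &
curl http://127.0.0.1:8080/predictions/model2 -T sample_input2.txt &
wait
```

You can also register or scale models at runtime through the management API on port 8081, e.g. `curl -X POST "http://127.0.0.1:8081/models?url=model2.mar&initial_workers=1"`.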