Running inference on multiple models at the same time with TorchServe

Can we run inference on two models at the same time on a single GPU with TorchServe? For example, I want to serve model 1 and model 2 as part of workflow 1 on the same system, and have both of them handle inference requests simultaneously. How can I do that?
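
For reference, here is a minimal sketch of what I have in mind, using TorchServe's standard management API (port 8081) and inference API (port 8080). The model names `model1`/`model2`, the `.mar` archive names, and the input file `sample_input.jpg` are placeholders for illustration, not real artifacts:

```python
import concurrent.futures

import requests

MANAGEMENT = "http://localhost:8081"  # TorchServe management API (default port)
INFERENCE = "http://localhost:8080"   # TorchServe inference API (default port)

# Hypothetical model archives; substitute your own .mar files in the model store.
MODELS = ["model1", "model2"]


def register(model_name: str) -> None:
    """Register a model archive and spin up one worker for it."""
    resp = requests.post(
        f"{MANAGEMENT}/models",
        params={"url": f"{model_name}.mar", "initial_workers": 1},
    )
    resp.raise_for_status()


def predict(model_name: str, payload: bytes) -> bytes:
    """Send one inference request to a named model."""
    resp = requests.post(f"{INFERENCE}/predictions/{model_name}", data=payload)
    resp.raise_for_status()
    return resp.content


if __name__ == "__main__":
    for name in MODELS:
        register(name)

    with open("sample_input.jpg", "rb") as f:  # hypothetical input file
        data = f.read()

    # Fire requests at both models concurrently; TorchServe schedules each
    # model's workers independently, so the question is whether both sets of
    # workers can share one GPU at the same time.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {pool.submit(predict, name, data): name for name in MODELS}
        for fut in concurrent.futures.as_completed(futures):
            print(futures[fut], "->", fut.result()[:80])
```

I am also aware that TorchServe has a workflows feature (`torch-workflow-archiver` with a workflow spec whose `dag` section can list two models as parallel branches), which sounds like it might cover my "workflow 1" case. Is that the recommended way to run two models side by side on one GPU, or should they simply be registered as two independent models as sketched above?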