Using Torchserve to deploy Tacotron2 and WaveGlow

Hello everyone,

I wanted to ask if anyone has experience deploying Tacotron2 and WaveGlow via TorchServe and could share some insights. I am slightly confused because both models are required for inference, and I did not notice anything covering this case in the TorchServe documentation. Moreover, I would like to run the inference step on a CPU — is this possible?
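For what it's worth, one common pattern for multi-model pipelines is to package both checkpoints into a single model archive with a custom handler that loads Tacotron2, runs WaveGlow on its output, and returns the audio. A rough sketch of the packaging and CPU setup is below — the file names (`tacotron2.pt`, `waveglow.pt`, `tts_handler.py`) are placeholders I made up for illustration, not anything from the TorchServe docs:

```shell
# Sketch (untested): bundle both models into one .mar archive.
# The custom handler (tts_handler.py) would load the Tacotron2 checkpoint
# via --serialized-file and the WaveGlow checkpoint via --extra-files.
torch-model-archiver \
  --model-name tacotron2_tts \
  --version 1.0 \
  --serialized-file tacotron2.pt \
  --handler tts_handler.py \
  --extra-files waveglow.pt \
  --export-path model_store

# To force CPU-only workers, set this in config.properties:
#   number_of_gpu=0
# TorchServe also falls back to CPU automatically when no GPU is visible.
torchserve --start --model-store model_store --models tts=tacotron2_tts.mar
```

Inside the handler, loading each checkpoint with `torch.load(path, map_location="cpu")` and calling `.eval()` should keep the whole pipeline on CPU, though I'd expect vocoder inference to be noticeably slower there than on a GPU.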

It seems to be possible, judging by the repository: serve/ at master · pytorch/serve · GitHub