Using TorchServe to deploy Tacotron2 and WaveGlow

Hello everyone,

I wanted to ask whether anyone has experience deploying Tacotron2 and WaveGlow (https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/SpeechSynthesis/Tacotron2) via TorchServe and could share some insights. I am slightly confused because both models are required for inference, and I did not find anything covering this kind of setup in the TorchServe documentation. Moreover, I would like to run the inference step on a CPU; is this possible?
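
In the meantime I sketched what a combined custom handler might look like; does this go in the right direction? This is a minimal, untested sketch with several assumptions: the two checkpoints are packaged with the model archive as `tacotron2.pt` and `waveglow.pt` and were saved as full modules via `torch.save(model, ...)`, the `text` package from the NVIDIA Tacotron2 repo is shipped via `--extra-files` so `text_to_sequence` is importable, and the class name `TTSHandler` is my own. Loading with `map_location="cpu"` is what should make CPU-only inference work.

```python
# Sketch of a combined Tacotron2 + WaveGlow handler (untested, assumptions
# as described above).
import io
import os

import torch
from scipy.io.wavfile import write as write_wav
from ts.torch_handler.base_handler import BaseHandler

# text module from the NVIDIA Tacotron2 repo, packaged via --extra-files
# (assumption: it is importable under this name inside the .mar)
from tacotron2.text import text_to_sequence

SAMPLING_RATE = 22050  # default rate of the NVIDIA Tacotron2/WaveGlow setup


class TTSHandler(BaseHandler):
    """Chains Tacotron2 (text -> mel) and WaveGlow (mel -> audio) on the CPU."""

    def initialize(self, context):
        model_dir = context.system_properties.get("model_dir")
        # map_location="cpu" forces CPU inference even on a GPU machine
        self.device = torch.device("cpu")
        # assumption: checkpoints were saved as whole modules, not state_dicts
        self.tacotron2 = torch.load(
            os.path.join(model_dir, "tacotron2.pt"), map_location=self.device
        ).eval()
        self.waveglow = torch.load(
            os.path.join(model_dir, "waveglow.pt"), map_location=self.device
        ).eval()
        self.initialized = True

    def preprocess(self, data):
        # assumes the request body is plain UTF-8 text
        text = data[0].get("data") or data[0].get("body")
        if isinstance(text, (bytes, bytearray)):
            text = text.decode("utf-8")
        sequence = torch.LongTensor(
            text_to_sequence(text, ["english_cleaners"])
        ).unsqueeze(0)
        lengths = torch.IntTensor([sequence.size(1)])
        return sequence.to(self.device), lengths.to(self.device)

    def inference(self, inputs):
        sequence, lengths = inputs
        with torch.no_grad():
            # Tacotron2 produces a mel spectrogram, WaveGlow turns it into audio
            mel, _, _ = self.tacotron2.infer(sequence, lengths)
            audio = self.waveglow.infer(mel)
        return audio

    def postprocess(self, audio):
        # return the waveform as a WAV file in the response body
        buf = io.BytesIO()
        write_wav(buf, SAMPLING_RATE, audio.squeeze().cpu().numpy())
        return [buf.getvalue()]
```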

It seems possible; the TorchServe repo has an example handler for exactly this: serve/waveglow_handler.py at master · pytorch/serve · GitHub
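
Based on that example, packaging and serving the combined handler should then look roughly like this (the file names `tts_handler.py`, `tacotron2.pt`, `waveglow.pt` and `text.zip` are placeholders matching the sketch above):

```bash
# bundle handler, checkpoints and the text module into a .mar archive
torch-model-archiver --model-name tts \
    --version 1.0 \
    --handler tts_handler.py \
    --extra-files tacotron2.pt,waveglow.pt,text.zip \
    --export-path model_store

# start TorchServe and register the model
torchserve --start --ncs --model-store model_store --models tts=tts.mar

# send text, receive synthesized audio
curl -X POST http://127.0.0.1:8080/predictions/tts -d "Hello world" -o hello.wav
```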