Can TorchServe be regarded as a general ML serving platform?

From the TorchServe documentation, it seems it can support any kind of ML architecture, not just PyTorch. Am I right?
Has anyone tried to deploy, say, a TensorFlow model on TorchServe? Any experience to share?
I'd be very glad for any suggestions!

Try using ONNX and Triton Inference Server for high-performance serving. You could also try torchpipe for a PyTorch interface with ONNX/TensorRT backends.
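To make the Triton suggestion concrete, here is a minimal sketch of a Triton model configuration (`config.pbtxt`) for serving an exported ONNX model. The model name, tensor names, and dimensions are all hypothetical placeholders; adjust them to match your actual exported graph.

```
# config.pbtxt — hypothetical example for an ONNX model served by Triton.
# Triton expects this file next to the model in its model repository:
#   model_repository/my_model/config.pbtxt
#   model_repository/my_model/1/model.onnx
name: "my_model"
platform: "onnxruntime_onnx"
max_batch_size: 8
input [
  {
    name: "input"          # must match the ONNX graph's input tensor name
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]  # per-sample shape; batch dim is implicit
  }
]
output [
  {
    name: "output"         # must match the ONNX graph's output tensor name
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```

With this layout, `tritonserver --model-repository=model_repository` will load the model and expose it over Triton's HTTP/gRPC inference APIs.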

Indeed, TorchServe is a general framework: it works by launching Python processes, and those processes can run anything.
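To illustrate why "those processes can run anything": a TorchServe custom handler is just a Python class with `initialize`/`preprocess`/`inference`/`postprocess` hooks, and nothing forces the model inside it to be a PyTorch model. The sketch below mimics that handler contract without importing TorchServe itself (in a real deployment you would subclass `ts.torch_handler.base_handler.BaseHandler`); `fake_predict` is a stand-in for any framework's predict call, e.g. a TensorFlow SavedModel signature.

```python
def fake_predict(batch):
    # Placeholder for any framework's inference call (TensorFlow, ONNX
    # Runtime, scikit-learn, ...). Here it just sums each input.
    return [sum(x) for x in batch]


class GenericHandler:
    """Sketch mirroring TorchServe's custom-handler hooks."""

    def __init__(self):
        self.model = None

    def initialize(self, context):
        # In TorchServe, `context` carries the model directory etc.;
        # any framework can load its model here.
        self.model = fake_predict

    def preprocess(self, requests):
        # TorchServe passes a list of request dicts; extract the payloads.
        return [req["body"] for req in requests]

    def inference(self, batch):
        return self.model(batch)

    def postprocess(self, outputs):
        # Must return one response per incoming request.
        return outputs

    def handle(self, requests, context):
        return self.postprocess(self.inference(self.preprocess(requests)))


if __name__ == "__main__":
    handler = GenericHandler()
    handler.initialize(None)
    print(handler.handle([{"body": [1, 2, 3]}, {"body": [4, 5]}], None))
```

Since the worker process is plain Python, swapping `fake_predict` for a TensorFlow or ONNX Runtime session is all it takes to serve a non-PyTorch model.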

There’s first-class support for ONNX (see serve/test/pytest/ at master · pytorch/serve · GitHub), and TensorFlow can be supported fairly easily.