Is there an equivalent of Tensorflow Serving in Pytorch?

Is there an equivalent of TensorFlow Serving in PyTorch? More specifically, an automated inference server that handles request batching to maximize performance, switches between models, runs experimental models, and records performance…

TensorFlow Serving: https://www.tensorflow.org/serving/

Hi,

No, there is no such thing at the moment, but contributions are welcome :slight_smile:
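
For the request-batching part specifically, here is a minimal sketch of what such a server could look like in plain PyTorch. This is not an existing API; the class name, batch size, and wait timeout are all illustrative, and a real server would add error handling, model switching, and metrics on top.

```python
import queue
import threading

import torch
import torch.nn as nn


class BatchingServer:
    """Collects single-sample requests and runs them through the model
    in batches, trading a small latency budget for higher throughput."""

    def __init__(self, model, max_batch_size=32, max_wait_seconds=0.01):
        self.model = model.eval()
        self.max_batch_size = max_batch_size
        self.max_wait_seconds = max_wait_seconds
        self.requests = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def infer(self, x):
        """Submit one input tensor; blocks until its result is ready."""
        done = threading.Event()
        slot = {"input": x, "output": None, "done": done}
        self.requests.put(slot)
        done.wait()
        return slot["output"]

    def _worker(self):
        while True:
            # Block until at least one request arrives, then gather more
            # until the batch is full or the wait budget is spent.
            batch = [self.requests.get()]
            while len(batch) < self.max_batch_size:
                try:
                    batch.append(self.requests.get(timeout=self.max_wait_seconds))
                except queue.Empty:
                    break
            inputs = torch.stack([slot["input"] for slot in batch])
            with torch.no_grad():
                outputs = self.model(inputs)
            for slot, out in zip(batch, outputs):
                slot["output"] = out
                slot["done"].set()


if __name__ == "__main__":
    # Toy model standing in for whatever you would actually serve.
    server = BatchingServer(nn.Linear(4, 2))
    print(server.infer(torch.randn(4)))
```

In practice you would put an HTTP or RPC layer in front of `infer`, but the core idea is the same: let concurrent requests accumulate briefly so the GPU sees larger batches.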

Okay, thanks :slight_smile: