Is there an equivalent of TensorFlow Serving in PyTorch?


(Joong Kun Lee) #1

Is there an equivalent of TensorFlow Serving in PyTorch? More specifically, an automated inference server that handles batching requests to maximize performance, switching models, running experimental models, and recording performance…

TensorFlow Serving: https://www.tensorflow.org/serving/


(Alban D) #2

Hi,

No, there is no such thing at the moment, but contributions are welcome :slight_smile:
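
For reference, here is a minimal sketch of what the request-batching part could look like on top of a plain PyTorch model. The `BatchedModelRunner` class and its parameters are purely illustrative, not an existing API; a real serving system would also need model switching, versioning, and metrics on top of this:

```python
# A minimal, hypothetical sketch of dynamic request batching in plain PyTorch.
# BatchedModelRunner, max_batch_size, and max_latency are illustrative names,
# not part of any existing library.
import queue
import threading
from concurrent.futures import Future

import torch


class BatchedModelRunner:
    """Collects single requests and runs them through the model in batches."""

    def __init__(self, model, max_batch_size=32, max_latency=0.01):
        self.model = model.eval()
        self.max_batch_size = max_batch_size
        self.max_latency = max_latency          # seconds to wait for a fuller batch
        self.requests = queue.Queue()
        threading.Thread(target=self._worker, daemon=True).start()

    def infer(self, x):
        """Submit one input tensor; returns a Future holding the model output."""
        fut = Future()
        self.requests.put((x, fut))
        return fut

    def _worker(self):
        while True:
            batch = [self.requests.get()]        # block until at least one request
            try:
                while len(batch) < self.max_batch_size:
                    batch.append(self.requests.get(timeout=self.max_latency))
            except queue.Empty:
                pass                             # timed out: run with what we have
            inputs, futures = zip(*batch)
            with torch.no_grad():
                outputs = self.model(torch.stack(inputs))
            for fut, out in zip(futures, outputs):
                fut.set_result(out)


# Usage: wrap any model, then call infer() from request-handler threads.
runner = BatchedModelRunner(torch.nn.Linear(4, 2))
result = runner.infer(torch.randn(4)).result()
```

The trade-off here is latency vs. throughput: the worker waits up to `max_latency` for more requests so the GPU sees larger batches, at the cost of slightly delaying individual requests.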


(Joong Kun Lee) #3

okay, thanks :slight_smile: