Why is TensorFlow favored for deployment?

Hello,
I have a somewhat general question. I am wondering why people say TensorFlow is better than PyTorch for deployment, because I see a lot of companies integrating their deep learning models into their code base using PyTorch.
In the end, all you need is an input, and the model should give back an output for further processing. Both frameworks support multi-GPU/CPU computing. So the question is: what do people actually mean by "deployment"? It would be great if someone could give a concrete example with technical details.
Thank you

A few reasons that I am aware of:

  1. PyTorch APIs for optimizing and otherwise preparing a model for production are still fairly new and rough around the edges.
  2. FB described themselves as relying on Caffe2 for production inference – but Caffe2 was merged into PyTorch in 2018, and the result for us end users is a somewhat confusing mix of possible runtimes (or “backends”): plain Python, JIT traced, JIT scripted, ONNX. If you quantize a model you might not be able to export it to ONNX, or even save it. And so on – things just don’t feel fully baked yet.
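To make the “mix of runtimes” concrete, here is a minimal sketch of two of them side by side: the same tiny model exported via JIT tracing and via JIT scripting. The `TinyNet` module is made up for illustration; the `torch.jit` calls are the real PyTorch API.

```python
import torch

class TinyNet(torch.nn.Module):
    """A throwaway model, just to have something to export."""
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = TinyNet().eval()
example = torch.randn(1, 4)

# Tracing records the ops executed for this one example input;
# any Python control flow is "frozen" to the path taken here.
traced = torch.jit.trace(model, example)

# Scripting compiles the module's Python source into TorchScript,
# preserving control flow - a different backend with different limits.
scripted = torch.jit.script(model)

with torch.no_grad():
    print(traced(example), scripted(example))  # same numbers, two runtimes
```

Both produce a serializable `ScriptModule`, but they fail in different ways on different models, which is part of why the ecosystem can feel confusing.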

In the end, all you need is an input, and the model should give back an output for further processing.

Hmm, a lot of people want more. Can you run the above on mobile? How about a production environment that doesn’t have Python at all?
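The “no Python” case is what TorchScript serialization is for: a scripted model is saved as a self-contained archive (code plus weights) that can be loaded without the original Python class, including from C++ via libtorch’s `torch::jit::load`. A minimal sketch (the `Add` module is made up for illustration):

```python
import io
import torch

class Add(torch.nn.Module):
    def forward(self, x):
        return x + 1.0

scripted = torch.jit.script(Add())

# torch.jit.save writes a self-contained archive: serialized
# TorchScript code + parameters, independent of this source file.
buf = io.BytesIO()  # stands in for a .pt file on disk
torch.jit.save(scripted, buf)

buf.seek(0)
restored = torch.jit.load(buf)  # no Add class definition needed here
print(restored(torch.zeros(2)))
```

The same `.pt` archive is what a C++ server or a mobile runtime would load, which is roughly what people mean when they say a framework is (or isn’t) ready for deployment.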
