Path from research to production

I’m looking to potentially transition from TF 2.0 to PyTorch for a variety of reasons, but before I do (and before I pitch the idea to the group) I need to sort out some potential sticking points, the main one being the production side.

I would like to do all of my development and testing in Python and then deploy for use in C++. The use case is as follows:

A standalone Windows application, built with MSVC, running one or more models for inference in C++. Input and output data are OpenCV Mats. The models are usually segmentation models (so the same shape in and out) or object detection models (or instance segmentation models like Mask R-CNN).

What would be the best path for this currently? I see a lot of possible ways forward: libtorch, TorchScript, ONNX to Azure, ONNX to Caffe2. I have gotten confused at this point.

I want the simplest way with good Windows support. TensorFlow has been a headache, with a lack of C++ documentation and little to no Windows support (it breaks constantly between releases).

I also want something that’s fast and scalable. In the future I might be running distributed training, cloud deployments, mixed precision, TensorRT, etc.

It’s a fairly small codebase and team, so API stability isn’t super critical.

You could script your models and load the scripted models in libtorch (the C++ frontend) for the inference part. Just to make sure your workload is exportable, you could try to script a few models similar to the ones you and your team are using, and perform the inference in C++.
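To make that concrete, here is a minimal sketch of the export side. The model, file name, and shapes here are just placeholders for whatever you actually use, not a prescription:

```python
# Minimal sketch: script a toy segmentation-style model and save it so it can
# be loaded from C++ with torch::jit::load. TinySegNet is only a stand-in.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, in_channels: int = 3, num_classes: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, num_classes, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Same spatial shape in and out, as with typical segmentation models.
        return self.body(x)

model = TinySegNet().eval()
scripted = torch.jit.script(model)   # or torch.jit.trace(model, example_input)
scripted.save("tiny_segnet.pt")      # placeholder file name

# Quick sanity check that the scripted model still runs.
dummy = torch.rand(1, 3, 256, 256)
out = scripted(dummy)
print(out.shape)  # torch.Size([1, 2, 256, 256])
```

On the C++ side you would then load the file with torch::jit::load and feed it tensors built from your cv::Mat data (for example via torch::from_blob), but the exact preprocessing depends on your models.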

To use TensorRT, you would have to go through ONNX. This blog post might give you some ideas about the current workflow.
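As a rough illustration of that ONNX step (reusing the toy TinySegNet from the sketch above; the file name, opset version, and dynamic axes are assumptions you would adapt), the export could look like this, after which TensorRT’s ONNX parser or trtexec can consume the resulting file:

```python
# Minimal sketch of exporting a model to ONNX on the way to TensorRT.
import torch

model = TinySegNet().eval()            # toy model from the previous sketch
dummy = torch.rand(1, 3, 256, 256)     # example input with placeholder shape

torch.onnx.export(
    model,
    dummy,
    "tiny_segnet.onnx",                # placeholder file name
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
    opset_version=11,
)
```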

So using PyTorch directly via its C++ frontend is the recommended way for production, rather than using ONNX and something else (Caffe2, for instance)? I just remember that when I first heard of Caffe2 at some GTC, it was pitched as more production-focused.

Have there been any issues with Windows support in the C++ frontend? TensorFlow has been killing me with that for years.

Thank you

Today, after Caffe2 has been merged into the PyTorch repo, and for someone who hasn’t used old-school Caffe2, what would be the difference between “PyTorch directly via its C++ frontend” and “Caffe2”?

If there is an open-source platform that FB uses for their production, it would be precisely what I’d want to use for my production, too. The use cases are obviously similar.

Didn’t realize that Caffe2 was merged into PyTorch. I probably should have looked into it after not hearing anything about it for years.

Deprecation warnings are all over https://caffe2.ai, and there is the announcement at https://caffe2.ai/blog/2018/05/02/Caffe2_PyTorch_1_0.html