I found some mostly unanswered questions in this forum that are months old, and I just cannot get a good handle on the libtorch concept. Is libtorch eventually going to get all of caffe2's functionality, with the deprecation happening only after that?
1) libtorch introduces yet another intermediate representation, with no way to load ONNX or other pretrained models directly, and no converter other than a multi-stage conversion walking the model through Python.
2) The now-similar names mean you CANNOT search for C++ API topics without being inundated with Python-only responses. The old Lua-based Torch also shows up in searches (both in searches outside this forum and in links back to it).
3) I am trying to find a C++ cross-platform inference library/framework. I am interested in using pretrained models from C++ in desktop and embedded environments, with training done (or to be done) on a bigger, more capable workstation. It is very difficult to wade through the Python-centric data-scientist learning material and tease out the C++ production side. (On the plus side: MXNet and TensorFlow do not have prebuilt binaries for Windows, and after 40+ hours of attempting to build them… I know why.)
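For what it's worth, the multi-stage path mentioned in point 1 looks roughly like this in practice: the model is traced to TorchScript in Python and saved to a file, and that same `.pt` file is what libtorch's `torch::jit::load()` consumes from C++. A minimal sketch, assuming only that the `torch` Python package is installed; `TinyNet` is a hypothetical stand-in for whatever pretrained network you start from:

```python
# Trace a model to TorchScript and save it; the saved artifact is
# Python-independent and loadable from C++ via torch::jit::load().
import torch
import torch.nn as nn

class TinyNet(nn.Module):  # hypothetical stand-in for a real pretrained model
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = TinyNet().eval()
example = torch.randn(1, 4)

# trace() records the ops performed on the example input and produces
# a TorchScript module that no longer depends on the Python class.
traced = torch.jit.trace(model, example)
traced.save("tiny_net.pt")

# Round-trip check: the saved artifact reloads and matches the original.
reloaded = torch.jit.load("tiny_net.pt")
assert torch.allclose(model(example), reloaded(example))
```

The C++ side would then open the same file with `torch::jit::load("tiny_net.pt")` and call `forward` on it, so the Python step is a one-time export, not a runtime dependency.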
So the question still stands: is libtorch going to be a scaled-down interface, or is there a realistic effort to keep C++ a first-class citizen, as it was/is in caffe2?
Said another way: can I still use caffe2 features and models on the assumption that libtorch will have equivalents before the deprecation, or would that be shooting myself in the foot?
A few weeks ago, @dambo posted a reasonably constructive take on concerns that seem to overlap yours and got quite a few responses, including some from the PyTorch/libtorch devs:
I would only recommend using libtorch if it sparks joy. Given the concerns you listed and your dissatisfaction with the naming, maybe a framework with a stronger brand name that offers commercial support with some sort of service-level agreement would serve your needs best.
I did find that post AFTER my posting, and I cannot agree more. It APPEARS like …
The fundamental question, for me, is still not answered: is this deprecation the death of caffe2 or not? Will the migration path be graceful or rude?
I do not know whether the C++ used in PyTorch is completely different from caffe2's or descends from a common ancestor. I know it was described as "merging", so architectural details would be helpful.
After my initial tests with Python on five or six different frameworks, it was a real slap in the face to find how poorly C++ is supported. So far caffe2 looks best, but then the red flag goes up on "deprecation" and "merging" and what exactly those mean.
| Framework | Recent C++ prebuilts | C++ buildable | Handles prebuilt models | HW acceleration |
|---|---|---|---|---|
| MXNet | No | Not so far | Yes | Fastest CUDA impl |
| TensorFlow | No | Not since the change to Bazel | The program is the model | Slower |
| caffe2 | Yes | Yes | Yes, lots of them | Yes |
| libtorch | Yes | Yes | Only TorchScript | Yes |
| OpenVINO | Yes | Yes | Yes, lots | Yes, Intel only |
Am I wrong about any of this assessment?
Forgive me, but what does "SLA" mean?
The naming is trivial; I just wanted to point out how it contributes to the impression of poor C++ support, since C++ results get washed out by the plethora of Python info.
Thanks for your reply.
What is Facebook's long-term strategy regarding the deployment of PyTorch models in production?
In some cases, I cannot fathom using Python for run-time inference (even though this is the strategy I and others have adopted), and I would love to know that C++ is indeed on the table for circumstances where Python is not applicable.