What is the recommended way to use PyTorch models in C++ for deployment?
I have read several threads on the forum but could not find a definitive answer. Assuming development effort is not an issue, will a pure C++ model (written with the C++ frontend) be faster during training and inference than a Python model converted with TorchScript and then loaded in a C++ application?
Are there any limitations to the Python -> TorchScript -> C++ approach compared to a pure C++ model?
Also, what is the PyTorch team's long-term support plan for TorchScript versus the C++ frontend (to better understand which approach is recommended)?
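For reference, this is the workflow I mean by Python -> TorchScript -> C++. A minimal sketch; the module and the file name are just placeholders, my real model is much larger:

```python
import torch

class Net(torch.nn.Module):
    """Placeholder model standing in for the real one."""
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

# Convert the Python model to TorchScript and serialize it to disk.
scripted = torch.jit.script(Net())
scripted.save("net.pt")
```

The saved `net.pt` would then be loaded on the C++ side with `torch::jit::load("net.pt")` and executed via the module's `forward` method, as opposed to defining and training the model directly with the C++ frontend (`torch::nn`).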