PyTorch 2.0 C++

It seems the only big thing in PyTorch 2.0 is torch.compile, which is essentially a “make it faster” function in the spirit of torch.mobile_optimizer.optimize_for_mobile.
That’s great, but the motivation behind it makes me worry that they’re giving up on first-class support for C++. I don’t really care about training in C++, but inference is massively important to me, and I’m sure to others. I hope they won’t relax support for ONNX export. Ideally there would be a “libtorch-lite.a” for running inference on TorchScript or torch.compile-d models with a very minimal library. That would be awesome. At the moment I have to use onnxruntime, which is fine, but it would be great if I could run inference on Torch models directly in C++ without the massively bloated, and impossible to cross-compile, libtorch.so.
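For concreteness, this is roughly what TorchScript inference looks like with today’s libtorch C++ API — a minimal sketch, assuming a model saved via torch.jit.save() to a placeholder path “model.pt” and a dummy 1x3x224x224 float input (both are illustrative assumptions, not from any particular model):

```cpp
#include <torch/script.h>  // Single header for TorchScript loading/inference.
#include <iostream>
#include <vector>

int main() {
  // Load a TorchScript module; "model.pt" is a placeholder path.
  torch::jit::script::Module module;
  try {
    module = torch::jit::load("model.pt");
  } catch (const c10::Error& e) {
    std::cerr << "error loading the model: " << e.what() << "\n";
    return 1;
  }
  module.eval();

  // Build a dummy input; the shape is an assumption for illustration.
  std::vector<torch::jit::IValue> inputs;
  inputs.push_back(torch::ones({1, 3, 224, 224}));

  // Run inference without gradient tracking.
  torch::NoGradGuard no_grad;
  at::Tensor output = module.forward(inputs).toTensor();
  std::cout << output.sizes() << "\n";
  return 0;
}
```

The code itself is tiny; the pain point is that linking even this drags in all of libtorch, which is exactly why a stripped-down inference-only library would be so welcome.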

Anybody else worried that using PyTorch models in C++ will slowly become harder and harder to do?

The export functionality is in the works; keep an eye out for Raziel’s and Suo’s talks at Dev con to learn more: https://www.youtube.com/@PyTorch/videos

Yes, I’m worried about that as well. And I actually do care about training in C++.

It would be nice if the repercussions of moving PyTorch “back into Python” were recognized as an issue for C++ developers and addressed in a comprehensive discussion.