It seems the only big thing in PyTorch 2.0 is torch.compile, which is just a "make faster" function like torch.mobile_optimizer.optimize_for_mobile.
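In case it helps, here's roughly how the two calls compare (the model below is just a stand-in, any nn.Module would do):

```python
import torch
import torch.nn as nn
from torch.utils.mobile_optimizer import optimize_for_mobile

# Stand-in model, purely illustrative
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4)).eval()

# PyTorch 2.0 path: wrap the module, get back a (hopefully) faster callable
compiled = torch.compile(model)

# Mobile path: script first, then optimize the scripted module
mobile = optimize_for_mobile(torch.jit.script(model))

x = torch.randn(1, 16)
y = compiled(x)  # same call signature as the original module
```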
That’s great, but my worry, given the motivation behind it, is that they’re giving up on first-class support for C++. I don’t really care about training in C++, but inference is of massive importance to me, and I’m sure to others. I hope they won’t relax the support for ONNX export. Ideally, it would be great if there were a “libtorch-lite.a” for running inference on torchscript or torch.compile-ed models using a very minimal library. That would be awesome. At the moment I have to use onnxruntime, which is fine, but it would be great if I could run torch models directly in C++ without having to use the massively bloated, and impossible-to-cross-compile, libtorch.so.
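To make the workaround concrete, this is roughly what going through ONNX looks like (the file name and tensor names are just placeholders; shown in Python here, but the C++ onnxruntime API follows the same pattern via Ort::Env / Ort::Session):

```python
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

# Stand-in model, purely illustrative
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4)).eval()

# Export from Python; input/output names are placeholders
dummy = torch.randn(1, 16)
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["input"], output_names=["output"])

# Inference through onnxruntime instead of libtorch
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
out = sess.run(None, {"input": np.random.randn(1, 16).astype(np.float32)})
```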
Anybody else worried that using PyTorch models in C++ will slowly become harder and harder to do?