PyTorch Mobile: current status

I noticed that the pytorch/ios-demo-app and pytorch/android-demo-app GitHub repositories had recently been archived, but I couldn't find any relevant discussion on GitHub or this forum. Will PyTorch Mobile continue its development, or should we use ONNX/ONNX Runtime instead?

Best,
Ryuichi


@ptrblck, could you shed some light on this recent development? The PyTorch Mobile iOS and Android demo apps have been archived, with no communication about the reasons. This move raises concerns about the future of TorchScript.

However, based on what I've read in another post here, Consequences of PyTorch 2.0 on libtorch?:

[TorchScript] is not deprecated. We are not actively developing torchscript, but it will be supported for the foreseeable future, and you’ll be able to use torchscript based backends behind dynamo.

I'm not involved in PyTorch Mobile development and don't know what their support plans and roadmap are.
Generally, I would be careful with TorchScript dependencies, as it's in "maintenance" mode and I would not expect major improvements or fixes anymore, since torch.compile is being prioritized.
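For reference, a "TorchScript dependency" here typically means the classic PyTorch Mobile export path: script the model, optimize it for mobile, and save it in the lite-interpreter format. A minimal sketch (TinyNet and the file name are just stand-ins):

```python
import torch
import torch.nn as nn
from torch.utils.mobile_optimizer import optimize_for_mobile

# Toy module standing in for a real model.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

model = TinyNet().eval()

# Script the model, apply mobile-specific graph optimizations, and save it
# in the lite-interpreter format consumed by LibTorch-Lite / pytorch_android_lite.
scripted = torch.jit.script(model)
optimized = optimize_for_mobile(scripted)
optimized._save_for_lite_interpreter("tiny_net.ptl")
```

Anything built on this flow inherits TorchScript's maintenance-mode status.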

Alright, thank you for the insights.

Hi,
PyTorch Mobile, via the lite interpreter built on TorchScript, is also in maintenance mode; there is very limited work being done on it at the moment. We are actively working on our new stack for edge devices, called ExecuTorch, which was released at the PyTorch Conference 2023. I'm typing from my phone so I can't find and paste links, but I'll update later.


This link shows a demo of ExecuTorch, in case that's the one you were looking for.

Hi all.

Could you clarify what "in maintenance mode" means? The LibTorch-Lite CocoaPod is no longer being published, the org.pytorch:pytorch_android_lite package on Maven Central is no longer being published, and I can't find any official docs on how to build the missing packages.

ExecuTorch only supports a smaller set of operators, so if one is trying to run inference with a more complicated model (e.g., a beam search decoder), should they focus on ONNX (sketched below) rather than trying to revive LibTorch by hand?
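For concreteness, the ONNX path we're weighing would look roughly like this: export with torch.onnx.export and run with ONNX Runtime. A minimal sketch (TinyNet, the file name, and the input/output names are stand-ins):

```python
import torch
import torch.nn as nn

# Toy module standing in for a real model.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

model = TinyNet().eval()
dummy = torch.randn(1, 4)

# Export the model to ONNX with named inputs/outputs.
torch.onnx.export(model, dummy, "tiny_net.onnx",
                  input_names=["x"], output_names=["y"])

# Run the exported model with ONNX Runtime (pip install onnxruntime).
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("tiny_net.onnx")
(y,) = sess.run(None, {"x": np.random.randn(1, 4).astype(np.float32)})
print(y.shape)  # (1, 2)
```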

It means that no resources will be spent on improving or fixing TorchScript anymore. That being said, I'm sure exceptions can be made depending on the scope of a regression, but ultimately it comes down to the code owner's decision.