Are MPS operators rewritten?

Are the operators in the MPS backend rewritten from scratch or taken from libtorch’s MPS backend?

The MPS backend in executorch is independent of the libtorch MPS implementation.

I see, thanks for the reply!

Follow-up question: will there be feature parity with the MPS libtorch backend? Or will we end up in a situation where TorchScript (which has access to a stable MPS backend) is deprecated while the ExecuTorch replacement is still premature?

@GregoryComer Can you help here?

The Metal strategy is somewhat in flux right now. It’s definitely an important area for us. I’ll see if I can tag one of the Metal experts to give a better answer. Could you give any more info on the use cases or types of models you’re most interested in?

Hey Gregory, thank you for getting back to us. We have a wide range of models in production (Gemma, Depth Anything, Segment Anything, around 50 in total). Currently we are trying to understand why TorchScript is being deprecated while no feature-parity replacement is in place, and what we can do about it.

TorchScript is being deprecated in favor of torch.export. This is a PyTorch-wide evolution, not something specific to ExecuTorch; the entire PyTorch ecosystem is moving in this direction.

ExecuTorch is designed as the on-device inference runtime built on top of torch.export, and it already supports a wide range of models.

For acceleration on Apple Silicon, you currently have several backend options within ExecuTorch:

  • CoreML backend: the most mature option; I’d recommend starting here. It covers a broad set of operators and integrates with Apple’s Neural Engine, GPU, and CPU.

  • Metal backend: newer and experimental, but already accelerating several voice and real-time use cases. Under active development.

  • MLX backend: just landed this week.

  • MPS backend: deprecated as of our last release and scheduled for removal in 1.4, so I wouldn’t recommend new adoption.

If you can share more about which of your ~50 models are highest priority, we can help identify the best export and delegation path for each.


Thank you so much for this comprehensive overview! We are currently looking into which models to prioritize and will have feedback soon. 🙂