Hello, I am new to ExecuTorch and I wanted to know whether a model created and optimized with ExecuTorch can be used from code written in Rust for an embedded system, or is that currently only possible with C/C++?
Ty for your answers.
I think you are referring to the runtime. So far it’s possible only with C++.
If you wanted to call the ExecuTorch runtime from Rust code, you could try wrapping it with the cxx crate.
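Roughly, a cxx bridge could look something like the sketch below. Everything here is hypothetical: the shim header and the load_module / forward functions are a thin C++ wrapper you would write yourself, not the actual ExecuTorch API.

```rust
// Cargo.toml would need `cxx = "1"`, and a build.rs using `cxx-build` would
// compile this bridge and link the ExecuTorch static libraries.
#[cxx::bridge]
mod ffi {
    unsafe extern "C++" {
        // Hypothetical C++ shim header that wraps the ExecuTorch runtime.
        include!("et_shim/shim.h");

        // Opaque C++ type: Rust only holds it behind a smart pointer.
        type Module;

        // Hypothetical shim functions exposed to Rust.
        fn load_module(path: &str) -> UniquePtr<Module>;
        fn forward(module: Pin<&mut Module>, input: &[f32]) -> Vec<f32>;
    }
}

fn main() {
    let mut module = ffi::load_module("model.pte");
    let output = ffi::forward(module.pin_mut(), &[1.0, 2.0, 3.0]);
    println!("{output:?}");
}
```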
After searching the web a bit for a solution, I decided to write a crate for it:
https://github.com/barakugav/executorch-rs/
Whoa that’s great @barakugav! How difficult was it to do that? Do you have any suggestions for ways to improve the ET API to make this sort of thing easier?
Hey @dbort, are you one of the ExecuTorch maintainers?
It wasn’t super hard to do, but it took me a few days in total. Most difficulties arose because it’s hard to write bindings to C++; it’s much easier to do for a C API. The API itself is very straightforward and kind of minimal, which is great for what I tried to do. A few concrete points:
- The documentation usually stated exactly which argument or field should outlive which object, which was very helpful, especially when trying to express that precisely for the Rust borrow checker using lifetimes. Sometimes I had to check the code to be sure, or when the docs were missing.
- The CMake compilation flags are very straightforward, but I did not find any documentation for them.
- I think there are some errors in the docs about the generated static libs; maybe it is worth creating a detailed page for them in the tutorial.
- `ArrayRef` and `Span` are great; I would love to see them used in places that expect `const char *`. When calling from Rust (which does not have a `\0` at the end of strings), that requires an additional allocation.
- I had to solve some very weird phenomena when calling C++ from Rust: for small `Result` objects, usually 16 bytes total, the bytes were altered (??) between the return from the C++ function and the Rust caller. Very weird. My guess is that my C++ compiler chose some unexpected ABI, maybe for optimization, and Rust didn't handle it well.
- Function implementations in headers are not callable from Rust, because Rust bindgen only generates function declarations and expects the implementations to be in the static lib, which doesn't happen with header implementations. To solve this and the previous point, I had to compile an additional C++ static lib that exposes a C API for these methods (see the sketch below). It's a big limitation for templates, unfortunately.
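To make those last two points concrete, here is roughly the pattern I ended up with on the Rust side; the shim function name and signature below are made up for illustration:

```rust
use std::ffi::c_void;
use std::os::raw::{c_char, c_int};

// Declaration of a hand-written C shim compiled into an extra static library.
// Taking a (pointer, length) pair instead of a NUL-terminated `const char *`
// lets a Rust &str be passed without an extra CString allocation, and
// returning the error code as a plain int (with the object handed back via an
// out-parameter) sidesteps the small-struct return-value ABI mismatch
// described above.
extern "C" {
    fn et_shim_load_program(
        path: *const c_char,
        path_len: usize,
        out_program: *mut *mut c_void,
    ) -> c_int;
}

// Thin safe wrapper over the shim.
fn load_program(path: &str) -> Result<*mut c_void, c_int> {
    let mut program: *mut c_void = std::ptr::null_mut();
    let err = unsafe {
        et_shim_load_program(path.as_ptr().cast::<c_char>(), path.len(), &mut program)
    };
    if err == 0 {
        Ok(program)
    } else {
        Err(err)
    }
}
```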
The last items are more like requests:
- It would be great to expose some precompiled binaries for ExecuTorch, maybe with the most common operations. That would be great for CI and for easy hello_world examples.
- As ExecuTorch targets edge devices, even embedded systems, I think it is crucial to be able to compile without the standard library; currently I wasn't able to do so.
Thank you for taking the time to write this up, it’s super helpful! I am one of the maintainers, and I’ll pass this along to other people on the project.
> It would be great to expose some precompiled binaries for ExecuTorch, maybe with the most common operations. That would be great for CI and for easy hello_world examples.

This is something we want to do, similar to PyTorch's libtorch: a zip file with prebuilt binaries, headers, and CMake stubs for linking against them.
> As ExecuTorch targets edge devices, even embedded systems, I think it is crucial to be able to compile without the standard library; currently I wasn't able to do so.

If you have more details here, could you create a GitHub issue? We do have users who use ET on bare-metal embedded systems, but they use an internal build system instead of CMake. So the code should be good, but maybe we need to add more control to the CMake system.
Interesting, I wonder how they link malloc, for example; maybe they just write a stub and don't use it, something like the sketch below.
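Purely to illustrate what I mean by a stub (assuming the runtime is configured so the allocating paths are never hit, and that this lives in a `#![no_std]` support crate rather than alongside a real libc):

```rust
use core::ffi::c_void;

// Satisfy the linker for allocation symbols the C++ runtime references but,
// by assumption, never actually calls at runtime on this target.
#[no_mangle]
pub extern "C" fn malloc(_size: usize) -> *mut c_void {
    core::ptr::null_mut() // allocation always "fails"
}

#[no_mangle]
pub extern "C" fn free(_ptr: *mut c_void) {}
```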
Anyway, opened an issue: Option to build without the standard library · Issue #4561 · pytorch/executorch · GitHub.
Thanks for the support! Keep it going