Cross-platform PyTorch stand-alone executables

At this stage of my project, I need to compile my code into binaries that work under Windows and/or Ubuntu. I know that jit is a great milestone in PyTorch, but I am not sure whether it is the right approach for my objectives, not to mention the complexity and effort needed to use jit or the PyTorch v1.0 C++ frontend. That said, there are tools that convert Python to binaries, for example PyInstaller, but they are not compatible with PyTorch. Are there any other alternatives for compiling PyTorch code?

Thanks in advance

What difficulties are you seeing with TorchScript (aka jit)? We are trying to make it as easy to use as possible for models defined in Python with PyTorch, so if you’re having issues please let us know. It appears that PyInstaller includes a Python runtime inside the binary, whereas TorchScript and the C++ frontend both have no dependency on Python and can be run in multithreaded environments with no GIL.
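To make the TorchScript route concrete, here is a minimal sketch of compiling an eager-mode model with `torch.jit.script` and saving it as a self-contained archive (the module class `Net` and the file name are my own illustration, not from the thread):

```python
import torch

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = Net().eval()
# Compile the Python model to TorchScript
scripted = torch.jit.script(model)

x = torch.randn(1, 4)
# The compiled module produces the same results as the eager one
assert torch.allclose(model(x), scripted(x))

# Save a standalone archive; it can later be loaded without the
# original Python source, in Python (torch.jit.load) or in C++
# (torch::jit::load)
scripted.save("net.pt")
loaded = torch.jit.load("net.pt")
```

The saved `net.pt` file is what you would ship to a Python-free deployment.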

As for TorchScript vs. the C++ frontend, that’s a more personal choice as they both rely on libtorch and the difference mostly comes down to how you want to define your model (in Python + TorchScript vs in C++), so it’s hard to say without more information on your use case.


Thanks for your answer, which saved me a lot of scattered effort.

To be honest, I have not tried jit yet; but I need to know which path to take before going forward with it.

From your answer I take it that, since I am using PyTorch, it's better to use jit.

@driazati, but if one needs to (1) train in C++, then (2) serialize the model, and then (3) load it in C++ in production, does one have to go through TorchScript in (2)? (OK, I know that there's ONNX.)

The C++ frontend is a high-level API that provides a similar experience to using PyTorch's Python API. As such, it has its own serialization mechanisms, so you can iterate on your model entirely in C++; this example shows how.

TorchScript lets you go from PyTorch models coded in Python to something that you can load in C++, so it’s not necessary for that case.
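For the Python-to-C++ path, the export happens entirely on the Python side; a brief sketch of the two standard routes (the model and file names are illustrative assumptions):

```python
import torch

model = torch.nn.Sequential(torch.nn.Linear(3, 3), torch.nn.Tanh()).eval()

# Option A: scripting — compiles the model's code, so Python control
# flow (if/loops) in forward() is preserved
scripted = torch.jit.script(model)
scripted.save("model_scripted.pt")

# Option B: tracing — records the operations executed on one sample
# input; simpler, but data-dependent branches are baked in
traced = torch.jit.trace(model, torch.randn(1, 3))
traced.save("model_traced.pt")

# Either file can then be loaded in C++ with torch::jit::load(...)
reloaded = torch.jit.load("model_scripted.pt")
```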

Thank you @driazati, but it's not clear what torch::save does. From the API docs it looks like a full serialization of a Module or Tensor, but the comment in the example says "checkpoint" (state_dict)? What is the serialization method, is it something similar to TensorFlow's ProtoBuf? Are there any limitations?

We have many serialization formats and they’re all different and easy to mix up. We’re working on ways to fix the UX here but for now:

  • torch.save() in eager-mode Python lets you save models so they can be loaded in Python with torch.load()
  • torch::save() in the C++ API lets you save models so they can be loaded in C++ with torch::load()
  • torch.jit.save() in Python lets you save models that have been compiled with TorchScript so they can be loaded in Python with torch.jit.load() and in C++ with torch::jit::load()
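The practical difference between the first and last bullet can be seen in a short sketch (the file names are my own placeholders): an eager-mode checkpoint saves only the parameters and needs the original Python class to restore, while a TorchScript archive carries the whole module.

```python
import torch

model = torch.nn.Linear(2, 2)

# Eager mode: save only the parameters (state_dict); loading requires
# reconstructing the same module class in Python first
torch.save(model.state_dict(), "weights.pt")
restored = torch.nn.Linear(2, 2)
restored.load_state_dict(torch.load("weights.pt"))

# TorchScript: save the whole compiled module; the file is
# self-contained and also loadable in C++ via torch::jit::load
torch.jit.save(torch.jit.script(model), "module.pt")
scripted_restored = torch.jit.load("module.pt")
```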

@driazati, thanks, but from your answer it's still unclear what torch::save() saves: only the "weights" or the whole model (Module).
The docs should definitely be more precise about this. And to be honest, this jit thing is also a little confusing, because it's kind of another world. E.g. in TensorFlow, if you load a .pb you end up with a normal tf.Graph, not something like a tf.pb.Graph :slight_smile: