How to serialize models with torch.compile properly


Despite the main points in the torch.compile pitch, we had some issues with JIT, but they were tolerable, and we adopted TorchScript (.jit) and torch.package as model serialization / obfuscation / freezing methods (and ONNX as well).

It may be seen as a disadvantage, but sharing a single .jit or .package artefact may be preferable to sharing the whole model codebase and then running torch.compile.
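For context, the single-artefact workflow described above looks roughly like this (a minimal sketch; `Net` is a hypothetical stand-in for the real model):

```python
import torch
import torch.nn as nn


class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)


# TorchScript freezes code and weights into one artefact,
# so consumers never need the original model codebase.
scripted = torch.jit.script(Net())
scripted.save("model.jit")

# Later, possibly on another machine without the source tree:
loaded = torch.jit.load("model.jit")
```

The open question is where torch.compile fits into this picture, since it wraps an eager module rather than producing a standalone artefact.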

So, I have a few general questions, which I believe have not yet been clearly answered in the docs / blog / release materials:

  • What is the preferred serialization format for torch.compile-ed models?
  • How do the JIT compiler, the .jit format, and torch.package play with compiled models?
  • Should I first compile the model, then export to jit or package, or vice versa, or am I getting it wrong altogether?

Or maybe it is still too early to ask these questions? Are there any ongoing discussions? I found this, but it seems not very informative.

Also, forgive me a dumb question, but why are there now two official PyTorch forums, this one and dev-discuss, the latter being ~100x smaller?

There is no serialization solution yet for torch.compile, but it's a high priority.
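In the meantime, a common workaround is to serialize only the plain state_dict and re-run torch.compile after loading, since compilation happens lazily on the first forward pass anyway. A minimal sketch (again with a hypothetical `Net`):

```python
import torch
import torch.nn as nn


class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)


model = Net()
compiled = torch.compile(model)  # wraps `model`; not serializable itself

# Save the plain (uncompiled) weights instead of the compiled wrapper...
torch.save(model.state_dict(), "model_weights.pt")

# ...and re-compile after loading:
restored_net = Net()
restored_net.load_state_dict(torch.load("model_weights.pt"))
restored = torch.compile(restored_net)
```

Note this still requires shipping the model code alongside the weights, which is exactly the drawback relative to a single .jit or .package artefact mentioned above.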

Regarding the two forums: this is more of a community forum for general PyTorch users, whereas the dev forum is more for PyTorch contributors. IMO this place feels more like a forum, and dev-discuss is closer to a blog.

I see, many thanks for your reply.

Hi again!

Since PyTorch 2.1 was released yesterday, I immediately rushed to read the release notes, and there is still not much information about proper packaging.

Am I missing something, or, with compile still in beta, is packaging still to be solved in the future?
