Does the JIT make models faster?

Are there any JIT performance measurements? Does it make a model any faster, or is the only benefit of involving the JIT the ability to save a model and run inference in environments other than Python?

Yes, we do monitor the performance of certain bits. For example, the recent PyTorch blog post on RNN speedups discusses benchmarks we’ve been monitoring quite closely and continue to work against. ResNet performance is also regularly checked.

That said, whether any given model sees significant speedups depends on the model.
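
For concreteness, this is roughly what “involving the JIT” looks like, including the save/load path the question asks about. A minimal sketch; the `Tiny` module and file name are just illustrative:

```python
import torch

class Tiny(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x).sum(dim=1)

# Compile the module to TorchScript.
scripted = torch.jit.script(Tiny())

# Save a self-contained archive that can be loaded without Python,
# e.g. from C++ via torch::jit::load.
scripted.save("tiny.pt")
loaded = torch.jit.load("tiny.pt")
print(loaded(torch.randn(2, 3)))
```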

  • I always give the ballpark figure of a 10% speedup for moving from Python to C++. I got this number from a couple of specific models, e.g. when you do a one-to-one translation of the LLTM model used in the C++ extension tutorial into C++. Your model will see different numbers. A similar speedup is probably there for the JIT.
  • Where the JIT really gets large speedups is when one of the optimizations can fully come into play. E.g. if you have chains of elementwise operations, they will be fused into a single kernel. As those are typically memory-bound, fusing two elementwise ops will be ~2x as fast as running them separately; see the sketch after this list.
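
A rough way to observe the fusion effect: time a small elementwise chain in eager mode vs. under `torch.jit.script`. A minimal sketch, not a rigorous benchmark; the `gate` function, tensor sizes, and iteration counts are made up for illustration, and fusion mainly kicks in on GPU:

```python
import time
import torch

# Chain of elementwise ops: eager mode launches one kernel per op,
# while the JIT fuser can combine them into a single kernel.
def gate(x, y):
    return torch.sigmoid(x) * y + x

gate_jit = torch.jit.script(gate)

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(4096, 4096, device=device)
y = torch.randn(4096, 4096, device=device)

def bench(fn, iters=100):
    # Warm-up: the JIT needs a few runs to profile and fuse.
    for _ in range(10):
        fn(x, y)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        fn(x, y)
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters

print(f"eager:    {bench(gate) * 1e3:.3f} ms/iter")
print(f"scripted: {bench(gate_jit) * 1e3:.3f} ms/iter")
```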

Best regards

Thomas