Is it possible to build a minimal libpytorch.so for C++ inference on mobile devices?

Hi,

We currently plan to follow the NativeApp demo in https://github.com/pytorch/android-demo-app to support inference on mobile devices. After operator reduction, the resulting libpytorch_jni.so is still about 11 MB for arm64-v8a. So:

  1. Is it possible to reduce it further? I have tried turning off some CMake options, including USE_MKLDNN, but it seems to have no effect.
  2. Is there another clean way to build a dedicated C++ library such as libtorch_cpu.so, without the JNI layer, to support native C++ linkage?

Thanks!

Yes, the custom build functionality allows you to build a libtorch with just the PyTorch ops you actually need.
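As a minimal sketch of how the operator list for such a custom build is usually produced (assuming a TorchScript model saved as `model.pt`, which is a placeholder name here), you can dump the ops your model actually uses to a YAML file and feed that file to the mobile build via the SELECTED_OP_LIST environment variable:

```python
import torch
import yaml

# Load the scripted/traced model you plan to ship on device.
model = torch.jit.load("model.pt")

# Collect the root operator names the model actually calls.
ops = torch.jit.export_opnames(model)

# Write them out; this file is then passed to the Android build,
# e.g. SELECTED_OP_LIST=model_ops.yaml scripts/build_pytorch_android.sh arm64-v8a
with open("model_ops.yaml", "w") as f:
    yaml.dump(ops, f)
```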

Best regards

Thomas

Thanks for the reply!

Aside from operator customization, is it possible to reduce the binary size further, e.g., via build options?

Thanks.

Most of these options should already be turned off for PyTorch mobile builds.
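If you want to double-check which options a given build was compiled with, one way (a sketch; this inspects whatever PyTorch build the Python interpreter is using, not the mobile .so itself) is to print the compile-time configuration string:

```python
import torch

# Print the compile-time configuration of the installed PyTorch/libtorch build,
# including which options (e.g. USE_MKLDNN) were enabled or disabled.
print(torch.__config__.show())
```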

Got it, thanks a lot!

BTW, is there an existing option to build a native library for dedicated C++ linkage, without the JNI layer?

Thanks!