Lightweight libtorch-cxx11 for C++ projects

Hi all,
I have a C++ project that compiles on Windows, macOS, and Linux. I have a pre-trained PyTorch model, and I perform inference (only inference, no training) with this model in my C++ project. When I distribute a standalone installer for the project, I also have to ship the PyTorch dependencies (libtorch-cxx11-abi-shared-with-deps-1.3.0.zip) with my software. The PyTorch dependencies are currently 1.4GB, while my application itself is around 2MB.
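For context, the inference side of the project just uses the standard TorchScript C++ API, roughly along these lines (a minimal sketch; the model path and input shape are placeholders):

```cpp
#include <torch/script.h>  // TorchScript C++ API (libtorch)

#include <iostream>
#include <vector>

int main() {
  // Load a model exported from Python with torch.jit.trace/script + save.
  // "model.pt" is a placeholder path.
  torch::jit::script::Module module = torch::jit::load("model.pt");

  // Build the input; the shape here is just an example.
  std::vector<torch::jit::IValue> inputs;
  inputs.push_back(torch::ones({1, 3, 224, 224}));

  // Run the forward pass and read the result back as a tensor.
  at::Tensor output = module.forward(inputs).toTensor();
  std::cout << output.sizes() << std::endl;
  return 0;
}
```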

Is there any way to create a distributable PyTorch library that contains only the minimal code required for inference (no training)? A 1.4GB dependency feels too big and unnecessary when all I want to do is run inference.

I would really appreciate it if the members of this forum could provide pointers and insight on how I can embed PyTorch inference functionality in my C++ projects without the burden of a 1.4GB libtorch dependency.

Abhishek

If you drop GPU support, you save quite a bit. However, as long as you want MKL, there is a pretty large chunk that won’t go away.
You could imitate the Android builds to get a small footprint.
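Whatever slimmed-down build you end up with, note that the calling code only needs the CPU, inference-only subset of the API. Something along these lines (a sketch; the model path is whatever you ship with your app):

```cpp
#include <torch/script.h>

#include <string>
#include <vector>

at::Tensor run_inference(const std::string& path, const at::Tensor& input) {
  // Autograd bookkeeping is training machinery; turn it off for inference.
  torch::NoGradGuard no_grad;

  // Load the module directly onto the CPU, so no GPU support is required.
  torch::jit::script::Module module = torch::jit::load(path, torch::kCPU);
  module.eval();  // puts layers like dropout/batch norm into inference mode

  std::vector<torch::jit::IValue> inputs{input};
  return module.forward(inputs).toTensor();
}
```

Since this path touches neither autograd nor any CUDA entry points, a build configured without GPU support and without the training machinery is sufficient for it.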

Take TensorFlow Serving, for example: the Docker image for CPU is 50MB, but for GPU it is 1GB 🙂

And libtensorflow_cc.so compiled with CUDA and TensorRT is ~0.5GB.

Thank you. The “Android builds” pointer is very useful.

@tlm @tom Can you please provide some hints on how the Android builds helped you? I checked the libs folder in my debug APK and found three .so files, but how can I use them without the corresponding .h files? I am stuck in the same situation you were in. Please help me; I need to showcase this as my college project.