How to get a minimal subset of libtorch for inference in C++?

If I just want to use libtorch for inference with a saved model in C++, how can I get a minimal subset of libtorch for that task? The full libtorch distribution is very large.
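
For context, the only workload I care about is loading a TorchScript model exported from Python and calling `forward` on it, roughly like the sketch below (the file name and input shape are placeholders, not my actual model):

```cpp
#include <torch/script.h>  // TorchScript C++ API (torch::jit)
#include <iostream>
#include <vector>

int main() {
  // Load a model previously exported from Python with torch.jit.save().
  // "model.pt" is a placeholder path.
  torch::jit::script::Module module = torch::jit::load("model.pt");
  module.eval();

  // Build an example input; the 1x3x224x224 shape is only for illustration.
  std::vector<torch::jit::IValue> inputs;
  inputs.push_back(torch::ones({1, 3, 224, 224}));

  // Run inference without tracking gradients.
  torch::NoGradGuard no_grad;
  at::Tensor output = module.forward(inputs).toTensor();
  std::cout << output.sizes() << std::endl;
  return 0;
}
```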


Hi,

To the best of my knowledge, this is not possible at the moment. This is mainly because we have a dynamic dispatcher, so which parts of the library will be used is not known at compile time.
That being said, I think work is underway on this, since it would also be very useful for the mobile build. But I don’t think there is any solution yet.


The documentation seems to suggest there will be a way in the future:

“As of PyTorch 1.3, PyTorch supports an end-to-end workflow from Python to deployment on iOS and Android. This is an early, experimental release that we will be building on in several areas over the coming months: … Build level optimization and selective compilation depending on the operators needed for user applications (i.e., you pay binary size for only the operators you need)”

However, it doesn’t look like there’s an easy way to actually do this right now; I’d love to hear from anyone who has succeeded. Following the Android tutorial gives an app size (excluding the model) of around 70 MB, which is clearly too large for most applications.