How to distribute libtorch with CUDA?

(Learned Lately) #1

I have a C++ application for Windows and Linux that detects whether a CUDA-capable GPU is available using torch::cuda::is_available(). If one is available, it uses CUDA for a dramatic speedup; if not, it falls back to the CPU. I want to distribute my app with all the necessary CUDA libraries so that the user does not need to install CUDA. What are the minimal libraries I need to distribute? I link against libnvrtc and libcuda.

Using ldd on Linux shows that my executable depends on libnvrtc and libcuda.

Each of these seems to have a lot of dependencies of its own, pulled in from /lib/x86_64-linux-gnu, /usr/lib/x86_64-linux-gnu, and /lib64.

Do I need to distribute all of these shared libraries?
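For context, here is a minimal sketch of what bundling the ldd-resolved libraries next to the binary could look like. The `bundle` function and `dist` directory are made-up names, and `/bin/ls` merely stands in for the real executable. Note also that libcuda.so is installed by the NVIDIA driver rather than the CUDA toolkit, so it would normally be excluded from such a bundle:

```shell
# Copy a binary plus the shared libraries ldd resolves for it into
# one directory, so the whole set can be shipped together.
bundle() {
  bin="$1"; out="$2"
  mkdir -p "$out/lib"
  cp "$bin" "$out/"
  # Keep the third field of ldd's "name => /path (addr)" lines,
  # i.e. the resolved absolute library paths.
  ldd "$bin" | awk '/=>/ && $3 ~ /^\// { print $3 }' |
    while read -r lib; do cp "$lib" "$out/lib/"; done
}

# Example: bundle /bin/ls and run it against the bundled copies.
bundle /bin/ls dist
LD_LIBRARY_PATH="$PWD/dist/lib" ./dist/ls >/dev/null && echo "bundled copy runs"
```

Whether doing this for the whole CUDA dependency chain is wise is exactly what the reply below questions.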

(Geirrastad) #2

I would probably not go down that road. As you say, your executable depends on two libraries, and those depend on others. But those again may depend on even more libraries… and they all have to be at a certain version level. On Linux I would just download the CUDA repo package from NVIDIA, install it along with the public key, and then do ‘apt install cuda libnvrtc9.2’ in an install script.
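A sketch of such an install script, written to a file so it can be shipped and run with root privileges on the target machine. The repo URL, .deb file name, key name, and package versions below are assumptions (CUDA 9.2 on Ubuntu 18.04 as an example); check NVIDIA's installation guide for your actual target:

```shell
# Generate the install script described above. Every name inside the
# heredoc is a placeholder to be adjusted per distro / CUDA version.
cat > install_cuda.sh <<'EOF'
#!/bin/sh
set -e
BASE=https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64
wget "$BASE/cuda-repo.deb"                    # placeholder .deb name
apt-key adv --fetch-keys "$BASE/7fa2af80.pub" # repo signing key (name may differ)
dpkg -i cuda-repo.deb
apt-get update
apt-get install -y cuda libnvrtc9.2
EOF
sh -n install_cuda.sh && echo "install script parses"
```

The user then runs `sudo sh install_cuda.sh` once, instead of the application shipping any CUDA libraries itself.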

Then you have libtorch, which again depends on external libraries.

But in theory, you could recursively apply ldd to every binary and library and write up a list of libraries to deploy. That would give you the minimum set of libraries to include.
Next you would need to list all the libraries already installed on the target system and skip them during install. And then one of those turns out not to be compatible with your version…
And lastly, not all of the licences might allow redistribution by a third party.
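The ldd walk described above can be sketched as follows. In practice ldd already resolves the whole chain of regular (DT_NEEDED) dependencies in one pass, so explicit recursion is rarely needed; libraries loaded at run time via dlopen() will not show up either way. The `deps` name is made up for this sketch:

```shell
# Print the unique, resolved shared-library paths a binary loads.
deps() {
  # Keep the resolved absolute path (third field of "name => /path").
  ldd "$1" | awk '/=>/ && $3 ~ /^\// { print $3 }' | sort -u
}

# Example run; for the real case you would take the union of
# deps over your executable and over libtorch itself.
deps /bin/ls
```

From that list you would then drop the libraries every target system already provides (libc, libm, the dynamic loader) and ship only the rest, which is precisely where the version-compatibility and licensing caveats above start to bite.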