Statically linking libtorch but still needing the shared library libcaffe2_nvrtc.so?

I’m building a plugin for a commercial software application, and the plugin uses libtorch internally. The plugin is a shared library (.so file) that links libtorch statically (i.e. all of libtorch is baked into its own .so file). The only dynamically linked libraries are the CUDA libs and, of course, some standard system libraries. The plugin needs to be portable to other systems without requiring lots of dependencies to be installed (needing CUDA as an additional install is fine).

Everything works, except that I can’t get my shared library to stop also depending on the torch shared library libcaffe2_nvrtc.so. Since no static version of this library (libcaffe2_nvrtc.a) exists, I can’t seem to get around it. Is there a way to keep libtorch from dynamically depending on this library while still having GPU support? If I build libtorch with CPU support only, the problem goes away, but I need the GPU acceleration.
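For anyone hitting the same thing, this is how you can confirm which shared objects a built plugin still requires at runtime. `libmyplugin.so` would be the actual plugin file; here the commands are demonstrated on `/bin/sh` so they are runnable as-is (on the real plugin, `libcaffe2_nvrtc.so` would show up in this output):

```shell
# ldd resolves and lists every shared object the dynamic loader
# would pull in, including transitive dependencies:
ldd /bin/sh

# readelf shows only the direct NEEDED entries recorded in the
# binary's dynamic section, which is usually what you want when
# deciding what to bundle alongside a plugin:
readelf -d /bin/sh | grep NEEDED
```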

My guess here is that the libcaffe2_nvrtc library can’t be made static because it depends directly on CUDA, which doesn’t exist in a static form, but that is only a guess, and I would like somebody more knowledgeable to confirm that this is the case. If so, I guess I just have to live with being forced to distribute libcaffe2_nvrtc.so together with my plugin, right? Any other suggestions or solutions?

Cheers, David

I guess you are right. Could you please open an issue on the PyTorch GitHub here?
We will have a build expert help you.

Thanks @glaringlee,
I’ve posted an issue here now:


We will triage this soon; please track updates in the issue.