No libtorch_cuda_cpp.so available when building PyTorch from source

Dear community,

I built PyTorch from source following the instructions on the GitHub page, using the latest PyTorch (commit 7c20ad3dfae18c5e262c59e09c2b6079ec7fc69f). The build was successful and the PyTorch Python API is able to detect CUDA devices.
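
For reference, by detecting CUDA devices I mean a quick check along these lines, which prints the torch version, the CUDA version it was built with, and the visible devices:

python3 -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available(), torch.cuda.device_count())"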

However, there is no libtorch_cuda_cpp.so available under …lib/python3.8/site-packages/torch/lib/, which should exist when PyTorch is installed from pip or conda.

Shared libraries after building from source:

ls .conda/envs/buildpt/lib/python3.8/site-packages/torch/lib/libtorch_cuda* 

.conda/envs/buildpt/lib/python3.8/site-packages/torch/lib/libtorch_cuda.so  
.conda/envs/buildpt/lib/python3.8/site-packages/torch/lib/libtorch_cuda_linalg.so

Shared libraries after installing from pip/conda:

ls .conda/envs/pytorch/lib/python3.8/site-packages/torch/lib/libtorch_cuda* 

.conda/envs/pytorch/lib/python3.8/site-packages/torch/lib/libtorch_cuda.so       
.conda/envs/pytorch/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cu.so 
.conda/envs/pytorch/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cpp.so 
.conda/envs/pytorch/lib/python3.8/site-packages/torch/lib/libtorch_cuda_linalg.so 

How to build these missing shared libraries?

Note that your source build isn’t missing these libraries, but should create a single libtorch_cuda.so lib instead of the split libraries.
However, if you really want to split them (we needed this to avoid linker errors due to the size of the library), use BUILD_SPLIT_CUDA=ON python setup.py install.
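
For completeness, a minimal sketch of the full invocation, assuming a clean PyTorch source checkout with submodules initialized; the clean step just clears artifacts from the previous build:

git submodule update --init --recursive
python setup.py clean
BUILD_SPLIT_CUDA=ON python setup.py install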

Thank you for the quick answer. I can see the split libraries now.

@ptrblck
why does
BUILD_SPLIT_CUDA=ON python3 setup.py bdist_wheel && pip3 install dist/*.whl
not split the libraries?
(I’m using mmcv + mmdet and they require the split libraries)
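
For context, this is how I check whether the installed wheel contains the split libraries (just listing the lib directory of the installed torch package):

ls "$(python3 -c 'import os, torch; print(os.path.join(os.path.dirname(torch.__file__), "lib"))')"/libtorch_cuda*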
Thanks

@ptrblck
BTW, how can I generate libtorch_cuda_cu.so with the torch 2.0 release (without compiling from source)?

You don’t need to generate this file, and it won’t be built by default anymore, as library splitting isn’t used in the current release if I’m not mistaken.

Thank you for the answer!

It seems TensorRT needs this file at runtime. Any advice on how to get it at install time without compiling from source?

I doubt a compatible TorchTRT version needs files that are not built, so could you give more context on why this should be the case?

I just made a post about it, but I’m also not 100% sure my compatibility setup is correct. I tried torch 1.13 on a pip recommendation, but that causes the boilerplate not to run properly, which I haven’t been able to debug.

Does setup.py support cross-compiling libtorch?

I don’t think the ability to cross-compile is related to setup.py; it rather depends on your build toolchain. I’ve heard some users were able to do so, but I usually just compile on the native target platform.

Thanks for your reply. Recently I used CMake to cross-compile libtorch directly with my own toolchain file. Now I’m not sure which part of setup.py or build_libtorch.py is related to the toolchain. I’d like to point the PyTorch build at my cross-compilation toolchain and try cross-compiling the source with setup.py or build_libtorch.py. Thanks.
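
For reference, my direct CMake route looks roughly like this; the toolchain file path, source/build directory layout, and option set are just placeholders for my own setup:

mkdir -p build-cross && cd build-cross
cmake ../pytorch \
  -DCMAKE_TOOLCHAIN_FILE="$HOME/toolchains/aarch64-linux-gnu.cmake" \
  -DCMAKE_BUILD_TYPE=Release \
  -DBUILD_PYTHON=OFF \
  -DBUILD_SHARED_LIBS=ON \
  -DCMAKE_INSTALL_PREFIX="$PWD/install"
cmake --build . --target install -- -j"$(nproc)"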