No libtorch_cuda_cpp.so available when building PyTorch from source

Dear community,

I built PyTorch from source following the instructions on the GitHub page, using the latest PyTorch (commit 7c20ad3dfae18c5e262c59e09c2b6079ec7fc69f). The build was successful and the PyTorch Python API is able to detect CUDA devices.

However, there is no libtorch_cuda_cpp.so under …lib/python3.8/site-packages/torch/lib/, which does exist when PyTorch is installed from pip or conda.

Shared libraries after building from source:

ls .conda/envs/buildpt/lib/python3.8/site-packages/torch/lib/libtorch_cuda* 

.conda/envs/buildpt/lib/python3.8/site-packages/torch/lib/libtorch_cuda.so  
.conda/envs/buildpt/lib/python3.8/site-packages/torch/lib/libtorch_cuda_linalg.so

Shared libraries after installing from pip/conda:

ls .conda/envs/pytorch/lib/python3.8/site-packages/torch/lib/libtorch_cuda* 

.conda/envs/pytorch/lib/python3.8/site-packages/torch/lib/libtorch_cuda.so       
.conda/envs/pytorch/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cu.so 
.conda/envs/pytorch/lib/python3.8/site-packages/torch/lib/libtorch_cuda_cpp.so 
.conda/envs/pytorch/lib/python3.8/site-packages/torch/lib/libtorch_cuda_linalg.so 
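
As a side note, the lib directory can also be located from whichever torch is active instead of hard-coding the conda env path; a minimal sketch, assuming torch is importable in the current environment:

# Minimal sketch: list the libtorch_cuda* libraries of the currently importable torch.
ls "$(python -c "import os, torch; print(os.path.dirname(torch.__file__))")/lib/"libtorch_cuda*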

How can I build these missing shared libraries?

Note that your source build isn’t missing these libraries; by default it creates a single libtorch_cuda.so instead of the split libraries.
However, if you really want to split them (we needed this to avoid linker errors due to the size of the library), use BUILD_SPLIT_CUDA=ON python setup.py install.
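
For completeness, a full split build from a source checkout might look like the sketch below; the clean step is an assumption (a CMake cache left over from a previous non-split build could otherwise keep the old settings), not part of the answer above.

# Sketch of a split-CUDA source build; the clean step is an assumption,
# meant to avoid reusing a build cache from an earlier non-split build.
python setup.py clean
BUILD_SPLIT_CUDA=ON python setup.py install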


Thank you for the quick answer. I can see the split libraries now 🙂

@ptrblck
Why doesn’t
BUILD_SPLIT_CUDA=ON python3 setup.py bdist_wheel && pip3 install dist/*.whl
split the libraries?
(I’m using mmcv + mmdet, and they require the split libraries.)
Thanks
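
For reference, the wheel-based flow with the flag spelled out end to end might look like the sketch below; the clean step and the forced reinstall are assumptions (a stale build cache or an already-installed wheel could mask the new build), not something confirmed in this thread.

# Sketch of the wheel flow with the split-CUDA flag; the clean step and
# --force-reinstall are assumptions, not confirmed answers to the question above.
python3 setup.py clean
BUILD_SPLIT_CUDA=ON python3 setup.py bdist_wheel
pip3 install --force-reinstall dist/*.whl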