Using torch on machines with a globally installed CUDA runtime

NVIDIA ships CUDA runtime installers, and its official Docker containers come with the CUDA runtime binaries installed in /usr/local/cuda .

On Linux, the torch pip package declares exactly these runtime libraries as pip package dependencies, pulled in as binary wheels from NVIDIA.

Thus, on a system where the CUDA runtime is already configured, installing torch from pip downloads roughly 2 GB of CUDA runtime a second time.
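To illustrate the duplication, here is a minimal sketch that lists the CUDA runtime shared libraries a global toolkit install already provides (the same libraries the nvidia-* wheels would download again). The function name is my own, and the path is the conventional default used by NVIDIA's installers and containers, not guaranteed on every system:

```python
from pathlib import Path

def system_cuda_libs(root="/usr/local/cuda"):
    """List the shared libraries provided by a global CUDA toolkit install.

    `root` is the conventional install location; adjust it if your
    toolkit lives elsewhere. Returns an empty list if no toolkit is found.
    """
    lib_dir = Path(root) / "lib64"
    if not lib_dir.is_dir():
        return []
    # e.g. libcudart.so.12, libcublas.so.12, ... on a typical install
    return sorted(p.name for p in lib_dir.glob("lib*.so*"))
```

On a machine with the toolkit installed, the names returned here overlap with the contents of the `nvidia-*` wheels that pip would fetch alongside torch.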

Is there a way to install a CUDA-compatible torch pip package without pulling in the NVIDIA pip package dependencies?

No, since your locally installed CUDA toolkit wouldn't be used at all. If you want to use your locally installed CUDA toolkit, you can build PyTorch from source.
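Before attempting a source build, it can help to confirm which toolkit the build would pick up. A minimal sketch of the usual lookup order (the `CUDA_HOME` environment variable, then `nvcc` on `PATH`, then the conventional `/usr/local/cuda` symlink); the function name is my own:

```python
import os
import shutil
from pathlib import Path

def find_cuda_home():
    """Guess the CUDA toolkit root a from-source build would use.

    Checks, in order: the CUDA_HOME/CUDA_PATH environment variables,
    the location of nvcc on PATH, and the conventional default symlink.
    Returns None if no toolkit can be located.
    """
    env = os.environ.get("CUDA_HOME") or os.environ.get("CUDA_PATH")
    if env:
        return Path(env)
    nvcc = shutil.which("nvcc")
    if nvcc:
        # .../cuda/bin/nvcc -> .../cuda
        return Path(nvcc).resolve().parent.parent
    default = Path("/usr/local/cuda")
    return default if default.is_dir() else None
```

If this returns your global install, setting `CUDA_HOME` to that path before building from source points the build at the system toolkit instead of the pip-provided wheels.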