Can I "link" PyTorch to an already installed CUDA?

Hi all,
I have a Dell XPS laptop with a GeForce GT 550M graphics card. The OS is Ubuntu 18.04 LTS.

I was unable to install PyTorch with CUDA, because it prints "Your GPU too old, CUDA is not supported",
so I installed the CPU-only configuration of PyTorch.

But I have CUDA installed separately with no problem.

Can I somehow "link"/"connect" PyTorch with the separately installed CUDA?

Let me add that I was able to install TensorFlow and it uses my graphics card.

Thanks in advance

I think you need to build PyTorch from source to use/link your own CUDA version installed on your system. Have a look here: GitHub - pytorch/pytorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration

But first, you may want to upgrade your graphics card drivers if you haven't done so already. Then, try to install the PyTorch version with the lowest CUDA version (i.e., conda install pytorch torchvision cuda80 -c pytorch).

Hi Sebastian, thanks a lot for your reply and link.
I already have the latest NVIDIA drivers for my card and CUDA 9.1 installed. Regarding your suggestion to install PyTorch with the lowest CUDA version: if I am successful, does it mean I'll have two CUDA versions installed simultaneously on my system, the current 9.1 which is used by TensorFlow, and a lower one which will be used by PyTorch? Will there be a potential conflict between the versions, or can they coexist?
Regards

If you have CUDA 9.1 installed and it works with TensorFlow, then it sounds like your card should at least support CUDA 9.

But I assume you have tried

conda install pytorch torchvision -c pytorch

and it didn’t work?

Will there be a potential conflict between the versions, or can they coexist?

No, don't worry, there won't be a conflict as long as you don't build PyTorch from source. If you use the pip or conda installer, PyTorch will come with its own separate CUDA and cuDNN bundle. This will be kept entirely separate and only used by PyTorch. This is by design to make the installation easier (it is also the reason why the PyTorch binaries are so large). The only thing you may want to do is update your graphics card driver to the latest version.
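
If you want to double-check which versions the binaries actually ship with (as opposed to your system-wide CUDA 9.1), a quick sanity check from Python would be something like:

import torch

print(torch.__version__)               # installed PyTorch version
print(torch.version.cuda)              # CUDA version the binaries were built with
print(torch.backends.cudnn.version())  # bundled cuDNN version
print(torch.cuda.is_available())       # whether PyTorch can see your GPU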


Hi Sebastian,

Thanks again for your prompt response. I followed your advice and, going the way of least resistance, I simply used
conda install pytorch torchvision -c pytorch
which you recommended. The installation was successful, but when I tried torch.cuda.current_device() I got a warning:

anaconda3/lib/python3.6/site-packages/torch/cuda/__init__.py:117: UserWarning:
    Found GPU0 GeForce GT 550M which is of cuda capability 2.1.
    PyTorch no longer supports this GPU because it is too old.

When I tried to run a Python script, it ended with:

RuntimeError: cuDNN error: CUDNN_STATUS_ARCH_MISMATCH

I found out that cuDNN doesn't support compute capability 2.1 cards. Maybe it uses built-in info to determine support, and 2.1 is the "official" capability for the GT 550M, but the fact is the installation with CUDA 8.0 was successful, and as I mentioned I can even install and use CUDA 9.
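
For reference, the capability PyTorch reports for the card can be queried with something like:

import torch

print(torch.cuda.get_device_name(0))        # e.g. GeForce GT 550M
print(torch.cuda.get_device_capability(0))  # (major, minor) compute capability, e.g. (2, 1)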

It looks like PyTorch has some sort of mess with versions. I think even if I manage to build PyTorch from source and link it against a separate installation of CUDA 9, it will still fail in cuDNN when checking the card support. I'll try to contact PyTorch support for clarification.

Many thanks for your help

I saw your reply and was surprised by it.

PyTorch will come with its own separate CUDA and cuDNN bundle. This will be kept entirely separate and only used by PyTorch.

I have a new PC and I didn't install CUDA, but when I installed PyTorch with the pip command, torch.cuda.is_available() returns True, so I thought pip would install CUDA and cuDNN automatically.
But when I run "nvcc -V" in CMD to inspect CUDA, it doesn't work! I am confused!
But someone says that PyTorch doesn't contain CUDA and cuDNN, so I don't know who is right!
Are you sure about that?
Thanks!

The PyTorch binaries ship with the CUDA, cuDNN, etc. runtime libs, so that you can use them in PyTorch directly.
The binaries will not install the complete CUDA toolkit (with the compiler) on your machine.
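
A quick way to see that distinction on your machine could be something along these lines:

import shutil
import torch

print(torch.cuda.is_available())  # True: the bundled CUDA runtime can talk to your driver
print(shutil.which("nvcc"))       # usually None: nvcc only comes with the full CUDA toolkit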


Thank you very much!
But I installed PyTorch with pip; is that the "PyTorch binaries"?
Thanks again!

Yes, pip will install the PyTorch wheel, and based on the command you were using (from here), you would get the necessary libs to run PyTorch on your GPU.
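
As a minimal smoke test (assuming the wheel installed correctly and a CUDA-capable GPU is visible), you could run something like:

import torch

x = torch.randn(3, 3, device="cuda")  # allocate a tensor directly on the GPU
print(x.device)                       # e.g. cuda:0
print((x @ x).sum())                  # run a small matmul on the GPU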


Thank you very much!
TensorFlow can't use the CUDA and cuDNN bundled with PyTorch, right? So if I want to learn TF, I should install CUDA and cuDNN manually, right?

I have another question: I use TensorBoard to create an event file:

from torch.utils.tensorboard import SummaryWriter

if __name__ == '__main__':
    writer = SummaryWriter("log")    # write event files to the "log" directory
    writer.add_scalar("test", 1, 1)  # tag, value, global step
    writer.add_scalar("test", 2, 2)
    writer.close()

But how can I run it? When I enter "tensorboard --logdir=logs", it tells me there is no executable named tensorboard.
Thanks !

I don't know how TensorFlow uses CUDA and cuDNN and whether they package them with their wheels/binaries.

I'm not using TensorBoard personally, but I would guess you need to install the tensorboard binary to run it.

I can run this code and it generates the file correctly.

from torch.utils.tensorboard import SummaryWriter

Does the PyTorch binary include TensorBoard?
If not, why can I use it?
Or is there something else that can replace it?
Thanks!

Based on the docs, it seems you would have to install tensorboard to use it from your terminal. PyTorch seems to ship only with the tensorboard utilities necessary to create the logs, not to visualize them in the browser.

This can then be visualized with TensorBoard, which should be installable and runnable with:

pip install tensorboard
tensorboard --logdir=runs
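
Note that --logdir has to point at the directory you passed to SummaryWriter, so for the snippet above (which used SummaryWriter("log")) it would be:

tensorboard --logdir=log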

I tried it, but it doesn't work.
Thank you!

The PyTorch binaries ship with the CUDA, cuDNN, etc. runtime libs, so that you can use them in PyTorch directly.

Has anything changed here?
Is there an option to link against the system CUDA, cuDNN, etc. runtime libs without building from source?

The current pip wheels use the CUDA PyPI wheels as their dependencies. We specify the used versions explicitly, so pip install torch will download these libs directly, but you could try to install newer versions, e.g. via pip install nvidia-cudnn-cu12==8.9.7.29 etc., as they should be backwards compatible.
The safer way would be to build PyTorch from source.
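
If you want to see which of these NVIDIA wheels your installed torch build actually declares as dependencies, one way (assuming Python 3.8+ for importlib.metadata) would be:

from importlib.metadata import requires

# list the nvidia-* runtime wheels declared as dependencies of torch
for req in requires("torch") or []:
    if req.startswith("nvidia-"):
        print(req)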