As the title suggests, I have pre-installed CUDA and cudnn (my TensorFlow is using them).
The CUDA version is 10.0, according to nvcc --version .
The cudnn version is 7.4.
I am trying to install PyTorch in a conda environment using conda install pytorch torchvision cudatoolkit=10.0 -c pytorch .
However, the installed PyTorch does not detect my GPU.
Does anyone know a way to install the GPU version of PyTorch against a specific CUDA and cudnn version? I do not want to change the CUDA and cudnn versions because my TensorFlow depends on them.
Your system installations of CUDA and cudnn won’t be used if you install the PyTorch binaries, since they ship with these libs.
E.g. conda install pytorch torchvision cudatoolkit=10.1 -c pytorch will install CUDA 10.1 and cudnn in your current conda environment.
What kind of error message are you seeing and which GPU are you using?
I didn’t see any error message. I just couldn’t use my GPU, meaning that print(torch.cuda.is_available()) gives me False. My GPU is a 2080 Ti with a 410.xx driver.
If conda install pytorch torchvision cudatoolkit=10.1 -c pytorch installs CUDA 10.1, does that mean the conda environment will use CUDA 10.1 instead of the one in the system?
Yes, that would use the shipped CUDA 10.1 version from the binaries instead of your local installation.
If you want to use the local CUDA and cudnn, you would need to build PyTorch from source.
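To verify which CUDA and cudnn build the binaries themselves report, you can query PyTorch directly. A minimal sketch (torch.version.cuda and torch.backends.cudnn.version() are standard PyTorch attributes; the guard just keeps the script from crashing if PyTorch isn’t importable):

```python
def torch_build_info():
    """Return (cuda_version, cudnn_version) bundled with the PyTorch
    binaries, or (None, None) if PyTorch is not importable.
    On CPU-only builds both values come back as None."""
    try:
        import torch
    except ImportError:
        return (None, None)
    # torch.version.cuda is the CUDA the binaries were built with,
    # not whatever nvcc --version reports on the system.
    cudnn = torch.backends.cudnn.version()  # e.g. 7401 for cudnn 7.4.1
    return (torch.version.cuda, cudnn)

if __name__ == "__main__":
    cuda, cudnn = torch_build_info()
    print("bundled CUDA:", cuda)
    print("bundled cudnn:", cudnn)
```

If this prints the cudatoolkit version you installed through conda rather than the system one, the environment is using the shipped libraries as expected.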
EDIT:
Note that CUDA 10.0.130 needs driver >= 410.48, as described here.
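One thing worth checking: per NVIDIA’s compatibility table, a 410.xx driver covers CUDA 10.0 but CUDA 10.1 needs >= 418.39, so the cudatoolkit=10.1 binaries may silently fail to see the GPU on that driver. A small sketch of the comparison (the version pairs below come from NVIDIA’s table; the helper names are made up for illustration):

```python
def parse_version(v):
    """'410.48' -> (410, 48) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def driver_ok(installed, required):
    """True if the installed driver meets the CUDA release's minimum."""
    return parse_version(installed) >= parse_version(required)

# CUDA 10.0.130 requires driver >= 410.48; CUDA 10.1 requires >= 418.39.
print(driver_ok("410.79", "410.48"))  # True: a 410.xx driver covers CUDA 10.0
print(driver_ok("410.79", "418.39"))  # False: too old for CUDA 10.1
```

If the driver is in the 410.xx range, installing cudatoolkit=10.0 instead of 10.1 (or updating the driver) would be the fix.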
I just checked my GPU driver version, and it meets that requirement.
Regarding the first paragraph of your reply: does that mean I should have no problem if I install CUDA through conda and use the shipped CUDA 10.1? That is not the case for me, though; PyTorch still does not detect my GPU even when I install the whole package through conda.