Would PyTorch for CUDA 11.6 work when CUDA is actually 12.0?

I will be using PyTorch for a deep learning application. My computer already has an NVIDIA GPU.

I installed CUDA following the CUDA Installation Guide for Microsoft Windows.

Following that, I installed PyTorch in a conda environment with the pip command suggested on pytorch.org:

pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116

However, when I check the installed CUDA version with nvcc --version, I get

... Cuda compilation tools, release 12.0, V12.0.76 Build cuda_12.0.r12.0/compiler.31968024_0

My installation seems to be okay: when I run torch.cuda.is_available() it returns True and identifies the name of the GPU correctly.

I just wanted to check whether it will hold up when I actually train a deep learning model on a big dataset, since the installed CUDA version is 12.0 but the PyTorch build I installed targets 11.6.

Many thanks in advance.


Yes, your setup will work since the PyTorch binaries ship with their own CUDA runtime (as well as other CUDA libs such as cuBLAS, cuDNN, NCCL, etc.). The locally installed CUDA toolkit (12.0 in your case) will only be used if you are building PyTorch from source or a custom CUDA extension.
The NVIDIA drivers are backwards-compatible so the newer driver (I would guess 525.60) will also work.
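For reference, here is a minimal sketch (assuming a recent PyTorch build) to check both sides from Python: the CUDA runtime bundled with the binary, and the local toolkit that would only matter for source or extension builds.

import torch
from torch.utils.cpp_extension import CUDA_HOME

# CUDA runtime version the PyTorch binary ships with, e.g. '11.6';
# this is independent of what nvcc --version reports for the local toolkit
print(torch.version.cuda)

# Path of the locally installed CUDA toolkit (None if not found); it is
# only used when building from source or compiling custom CUDA extensions
print(CUDA_HOME)

# True if the installed NVIDIA driver can run the bundled runtime
print(torch.cuda.is_available())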


Got it, thank you so much!

Oh wait… are you saying we don’t really need to do those painful installs of NVIDIA CUDA and cuDNN on Windows or Linux for PyTorch? PyTorch works without those installs?


Yes, this has always been the case since we started shipping binaries.


So if you don’t actually need to install the CUDA toolkit, since PyTorch already ships its own CUDA libraries, when will PyTorch support for CUDA 12 be available for download? The PyTorch website still only offers CUDA 11.7 and CUDA 11.8.

The nightlies are already built, but not public yet since they need more testing.
If you don’t want to wait for the binaries you can always build from source using your locally installed CUDA toolkit.

Sorry, but I am still confused by this. I have CUDA installed on my PC; this is the output of nvidia-smi:

I tried to install PyTorch by running:
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia

but apparently there is no GPU available to PyTorch:

Python 3.8.17 | packaged by conda-forge | (default, Jun 16 2023, 07:06:00) 
[GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> 
>>> 
>>> import torch
>>> torch.cuda.is_available()
False
>>> torch.cuda.device_count()
0
>>> 
>>> 
>>> torch.__version__
'2.0.0'
>>>

I know I have CUDA 12.2 and I am trying to install PyTorch with CUDA 11.8, but if I understand correctly that shouldn’t matter, since PyTorch comes with its own CUDA binaries. What am I doing wrong, please?

What does torch.version.cuda return? If it’s None you’ve installed the CPU-only binary.
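As a quick sanity check, something along these lines (just a sketch) fails fast if the CPU-only package was pulled in:

import torch

# torch.version.cuda is None for a CPU-only build and a version string
# such as '11.8' for a CUDA-enabled build
if torch.version.cuda is None:
    raise RuntimeError("CPU-only PyTorch build installed; reinstall a CUDA-enabled build")

print(f"Bundled CUDA runtime: {torch.version.cuda}")
print(f"GPU visible to PyTorch: {torch.cuda.is_available()}")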

Yes, torch.version.cuda doesn’t return anything, so it is None. This is the command I need to run, right? conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia

OK, I will try again. I am almost certain I ran the command above, but who knows, maybe I made a mistake… I will remove the environment and start from a fresh one. Anyway, thanks for your help!

OK, for some strange reason PyTorch with CUDA enabled doesn’t work for me when I use the conda command. This is the package plan,

but still this is what I get:

Python 3.8.0 | packaged by conda-forge | (default, Nov 22 2019, 19:11:38) 
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> 
>>> 
>>> import torch
>>> 
>>> 
>>> torch.version.cuda
>>> torch.cuda.is_available()
False

However, when I try the pip command, everything looks fine! Below is the PyTorch that got installed with pip:

Python 3.8.0 | packaged by conda-forge | (default, Nov 22 2019, 19:11:38) 
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> 
>>> 
>>> 
>>> import torch
>>> 
>>> 
>>> torch.cuda.is_available()
True
>>> 
>>> torch.version.cuda
'11.8'
>>>

I don’t know why that happens. I have to say, however, that I am using the Mambaforge distribution, and something might be going on with its package manager, which is presumably compatible with conda but maybe not in this particular case.

I think I will remove Mambaforge entirely, install Anaconda, and try again.

I’m back here again. I removed Mambaforge, installed Miniconda, and then the GPU version of PyTorch. Everything is fine:

Python 3.8.17 (default, Jul  5 2023, 21:04:15)
[GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> import torch
>>>
>>> torch.version.cuda
'11.8'
>>> torch.cuda.is_available()
True
>>>

I guess mamba is to blame then…