Can I simply go to the PyTorch website and use the following command to download a CUDA-enabled PyTorch library? Or do I have to set up CUDA on my device first, before installing the CUDA-enabled PyTorch?
pip3 install torch===1.3.0 torchvision===0.4.1 -f https://download.pytorch.org/whl/torch_stable.html
Hi,
You don't need to have CUDA installed to install the CUDA-enabled PyTorch package, but you do need CUDA to use it.
We do not ship CUDA with PyTorch as it is a very big library.
Thanks for the reply @albanD!
So I guess the point you are trying to make is that I can install CUDA-enabled PyTorch without having to install CUDA… it's just that PyTorch wouldn't use the GPU that way. Eventually, for PyTorch to make use of the GPU, I will have to install CUDA.
Please correct me if I am wrong.
That’s the right idea.
You can use a cuda build on cpu even without cuda. But to be able to use the GPU, you will need to install CUDA.
Thanks a lot @albanD for helping me out!
OK, so I did install torch using the following command
conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
but as expected, I am not able to use the GPU.
So I want to know what exactly I need to do so that
torch.cuda.is_available()
returns True.
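For reference, a minimal sanity check might look like the sketch below (assuming a standard PyTorch install; it only reports what the installed binary can see):

import torch

print(torch.__version__)          # installed PyTorch version
print(torch.version.cuda)         # CUDA runtime the binary was built with (None for CPU-only builds)
print(torch.cuda.is_available())  # True only if a compatible NVIDIA driver and GPU are found

if torch.cuda.is_available():
    x = torch.rand(10).cuda()     # small smoke test: move a tensor to the GPU
    print(x.device)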
Do you have your gpu properly installed?
Do you have recent nvidia drivers for it?
What is the exact error message that you get when doing torch.rand(10).cuda()?
Hi @albanD!
How do I check if my GPU is properly installed? Or how do I check if I have the NVIDIA drivers for it? Can you please help me with all this…? It's all a little confusing :o
And I did try running the code that you wanted me to run… Following is its error message:
The NVIDIA driver on your system is too old (found version 10000).
Please update your GPU driver by downloading and installing a new
version from the URL: http://www.nvidia.com/Download/index.aspx
Alternatively, go to: https://pytorch.org to install
a PyTorch version that has been compiled with your version
of the CUDA driver.
In a command line, you can run nvidia-smi, which should show you all your GPUs.
For the driver, you can try to run the samples that ship with the CUDA install.
Given the error message, the problem seems to be that the NVIDIA driver (GPU driver) is too old. You might want to update that.
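As a rough sketch (assuming nvidia-smi is on your PATH), the same driver check can also be scripted from Python:

import subprocess

# Ask the driver for the GPU name and driver version (standard nvidia-smi query fields).
result = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,driver_version", "--format=csv,noheader"],
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)  # empty or error output usually means no working driver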
I solved it, and yes, you were right @albanD!
My NVIDIA drivers were old. I just updated them by going to Start > Device Manager > Display adapters > select your GPU > Right Click > Update Driver.
Thanks a lot!
Just curious, is the same true for cuDNN? I.e. does a user need to manually install cuDNN before CUDA enabled PyTorch will work or does the PyTorch installer do this for you?
The conda binaries and pip wheels also ship with the cudnn library, so you don't need to install it separately (same for NCCL).
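A quick way to confirm this from an installed binary (a sketch; the exact numbers depend on which build you installed):

import torch

print(torch.backends.cudnn.is_available())  # True if the bundled cuDNN can be used
print(torch.backends.cudnn.version())       # version of the bundled cuDNN, reported as an integer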
Hello, I was following your discussion. I have updated the GPU driver. The installed driver shows CUDA 11.2.
While installing PyTorch with
conda install pytorch torchvision cudatoolkit=11.2 -c pytorch, it throws a package-not-found error. Although I could install cudatoolkit=10.1 without error, I am still NOT able to use the GPU with PyTorch. With the suggested torch.rand(10).cuda() I get AssertionError: Torch not compiled with CUDA enabled. I do not understand where the problem is. I am wondering if I need to update the system path for CUDA 10.1?
Hello albanD, I have updated the GPU driver to the latest one, 461.40. It shows the required CUDA version is 11.2. I am confused about the following:
- Do I need to install CUDA 11.2 and set the path accordingly before running conda install pytorch torchvision…?
- Can I install any lower version of CUDA for the updated GPU driver?
Thank you.
The conda binaries and pip wheels are not yet built with cudatoolkit=11.2, so you would have to use 9.2, 10.1, 10.2, or 11.0 as given in the install instructions.
Since these binaries ship with their own CUDA runtime, you would only need a local NVIDIA driver corresponding to the CUDA runtime you are selecting.
In case you want to build PyTorch from source or any custom CUDA extensions, you should install a matching local CUDA toolkit.
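To make that distinction concrete, here is a small sketch that reports the CUDA runtime bundled with the binaries and whether a local toolkit (nvcc) is present; nvcc is only needed for building from source or for custom CUDA extensions:

import shutil
import subprocess
import torch

print("CUDA runtime bundled with PyTorch:", torch.version.cuda)

nvcc = shutil.which("nvcc")
if nvcc is None:
    print("No local CUDA toolkit found - fine for running, only needed for building extensions")
else:
    print(subprocess.run([nvcc, "--version"], capture_output=True, text=True).stdout)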
Hi
I am not sure if this discussion is still valid.
I am confused about whether in 2021 we still need to have the CUDA toolkit installed on the system before we install the PyTorch GPU version, or does conda install the toolkit as well via "conda install cudatoolkit=11.1"?
And if conda installs the toolkit, does pip3 also do that, even though the toolkit is not explicitly mentioned in the following command:
“pip3 install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html”
This wasn't the case before, and you would still only need to install the NVIDIA driver to run GPU workloads using the PyTorch binaries with the appropriately specified cudatoolkit version.
One limitation to this is that you would still need a locally installed CUDA toolkit to build custom CUDA extensions or PyTorch from source.
Yes, but the pip wheels are statically linking it instead of depending on the conda cudatoolkit.
I understood that we need the NVIDIA drivers.
I understood that cuDNN and nvcc come with the PyTorch installation.
I understood that the CUDA version I specify should be supported by the NVIDIA driver.
What is still not 100% clear is:
Do we need to install the CUDA toolkit separately, or is it taken care of by pip3/conda?
Let's ignore the part about building custom CUDA extensions and PyTorch from source.
No CUDA toolkit will be installed by the current binaries, only the CUDA runtime, which explains why you could execute GPU workloads but not build anything from source.
Is this still true as of today (Oct 2021)? I am using torch 1.9. It looks like my torch installation via pip comes with a CUDA version different from the one shown by nvidia-smi.
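For anyone comparing the two numbers, here is a small sketch (assuming nvidia-smi is available) that prints both. As noted above, the binaries bundle their own CUDA runtime, while nvidia-smi reports the CUDA version supported by the driver, so the two do not have to match exactly:

import subprocess
import torch

# CUDA runtime the PyTorch binary was built with (bundled in the wheel/conda package)
print("torch.version.cuda:", torch.version.cuda)

# CUDA version supported by the installed driver, as printed in the nvidia-smi header
out = subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout
print([line for line in out.splitlines() if "CUDA Version" in line])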