Is it required to set up CUDA on the PC before installing CUDA-enabled PyTorch?

It’s taking me almost 8-9 hours to install PyTorch 1.10 with CUDA 11.3. Is that OK, or is something wrong? I am using a GeForce RTX 3090.

Something seems to be wrong.
Installing the pip wheels / conda binaries should take seconds to minutes, depending on your internet connection, while building from source could take between 15 and 120 minutes on x86, depending on your workstation, the selected architectures, etc., and potentially much longer on embedded devices if you build natively on them.
Could you describe your install approach and what kind of output you are seeing?
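
For reference, installing the prebuilt binaries is normally a single command along the lines of the one below (the exact versions and channels come from the install selector on pytorch.org, so treat this only as an illustrative example), and it should finish in minutes rather than hours:

conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch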

@ptrblck is there a way to avoid having PyTorch install the CUDA runtime if I already have everything installed on the system, while still using the pre-compiled binaries?
The sizes involved here are a bit insane to me: ~1 GB for the pytorch conda package, almost 1 GB for the cudatoolkit conda package, and ~2 GB for the pytorch pip wheels.
Why do you force the CUDA package requirement on the CUDA-enabled pytorch conda package?
I’d like to use PyTorch in CI, but given the sizes involved here (and the time needed for compilation from source), I’m not sure I want to use PyTorch at all anymore.

The binaries ship with the CUDA runtime for ease of use, as users often struggle to install the CUDA toolkit locally together with other libraries such as cuDNN and NCCL.
To use the GPU on your system in PyTorch, you would thus only need to install the correct NVIDIA driver and one of the binary packages.
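
A quick sanity check that the driver is visible and that the bundled CUDA runtime is picked up could look like this (run from a shell; the versions printed will differ on your machine):

nvidia-smi                                                               # shows the installed NVIDIA driver and the GPUs it sees
python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"   # CUDA runtime shipped with the binaries, and whether the GPU is usable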

Building from source would use your local setup and would of course avoid having to download the binaries.


I would be more interested in knowing whether we still need cudatoolkit=11.1 in

conda install -y pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia

if CUDA is already installed on the system.

Yes, it’s needed, since the binaries ship with their own CUDA libraries and will not use your locally installed CUDA toolkit unless you build PyTorch from source or build a custom CUDA extension.
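
One way to see this locally (assuming a local toolkit is installed and nvcc is on the PATH): the version reported by PyTorch can differ from the local toolkit version without causing problems, because the binaries use their own bundled libraries:

python -c "import torch; print(torch.version.cuda)"   # CUDA runtime the binaries were built against and ship with
nvcc --version                                        # locally installed CUDA toolkit, which PyTorch binaries do not use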
