Pytorch not getting compiled with GPU when using conda install

Your local CUDA toolkit won’t be used unless you build PyTorch from source or a custom CUDA extension, since the binaries ship with their own CUDA runtime, cuDNN, cuBLAS, NCCL, and other dependencies. You only need a properly installed NVIDIA driver to run PyTorch.
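If you want to double check this, the bundled runtime can be inspected directly from Python (a minimal sketch; the exact version numbers will depend on the build you installed):

import torch

print(torch.__version__)               # installed PyTorch release, e.g. 1.13.1
print(torch.version.cuda)              # CUDA runtime shipped with the binaries, e.g. 11.7
print(torch.backends.cudnn.version())  # bundled cuDNN, e.g. 8500
print(torch.cuda.is_available())       # True as long as a working NVIDIA driver sees a GPU

Note that torch.version.cuda reports the bundled runtime, not any CUDA toolkit installed system-wide.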
The posted command works for me in a new environment (using Python 3.8) and installs the right packages:

conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
...
The following NEW packages will be INSTALLED:
...
  pytorch            pytorch/linux-64::pytorch-1.13.1-py3.8_cuda11.7_cudnn8.5.0_0
  pytorch-mutex      pytorch/noarch::pytorch-mutex-1.0-cuda
  requests           conda-forge/noarch::requests-2.28.2-pyhd8ed1ab_0
  svt-av1            conda-forge/linux-64::svt-av1-1.4.1-hcb278e6_0
  torchaudio         pytorch/linux-64::torchaudio-0.13.1-py38_cu117
  torchvision        pytorch/linux-64::torchvision-0.14.1-py38_cu117

As shown, the CUDA 11.7 binaries are properly selected.
Also note that in your environment the same command installs the older 1.12.1 release instead of the current stable 1.13.1 release, as well as 3rd-party libs (torchaudio and torchvision) built with CUDA 11.3.
I don’t know why conda fails to install the desired version, so could you post the install output showing all dependencies?
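In the meantime, the output of torch.utils.collect_env would also help, as it lists the detected driver, CUDA, and cuDNN versions (a minimal sketch, assuming PyTorch imports in the affected environment):

from torch.utils import collect_env

collect_env.main()  # prints PyTorch version, CUDA/cuDNN versions, GPU model, and driver info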
