When I install torch==2.6.0 with whl/cu126, none of the CUDA dependencies get installed. I already have CUDA 12.6 in the environment. Is this expected?

Background

I’m trying to build a PyTorch Docker image: dstoolkit-devcontainers/src/sample_pytorch_gpu_project/.devcontainer/Dockerfile at main · microsoft/dstoolkit-devcontainers. I was creating a PyTorch container with nvidia/cuda as the base image and then installing PyTorch on top of that, hoping that PyTorch would use the CUDA dependencies already installed in the base image. Recently I learned that, done this way, PyTorch doesn’t actually use the CUDA dependencies available in the base container; instead, pip install torch==2.6.0 installs its own copies. So I’ve been investigating how to make PyTorch use the already installed CUDA dependencies rather than installing its own.
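For context, a minimal sketch of the setup described above (the base image and tag are taken from this question; the layout of the actual Dockerfile in the repository will differ):

```dockerfile
# Sketch: nvidia/cuda base image with the CUDA 12.6 runtime,
# plus PyTorch installed from the matching cu126 wheel index.
FROM nvidia/cuda:12.6.3-cudnn-runtime-ubuntu24.04

RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

# Pull the cu126 build so the wheel matches the toolkit in the base image
# (--break-system-packages is needed for the system pip on Ubuntu 24.04)
RUN pip3 install --break-system-packages torch==2.6.0 \
        --index-url https://download.pytorch.org/whl/cu126
```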

Question

As part of this exploration, I realized that pip install torch==2.6.0 actually installs the CUDA 12.4 build, which doesn’t match the base container I use, nvidia/cuda:12.6.3-cudnn-runtime-ubuntu24.04.

When I switched to --index-url https://download.pytorch.org/whl/cu126, it stopped installing the CUDA dependencies, and torch now refers to the locally preinstalled CUDA libraries, so I achieved the goal I mentioned in the Background. But I only get this behavior with cu126. When I use cu118 or the default (CUDA 12.4), it still installs the CUDA dependencies. Is this inconsistent behavior expected? Does the pip install torch process check the local CUDA version and decide whether or not to install the dependencies? If this is expected and this is how to use locally installed CUDA dependencies, I’m happy with it, but I wanted to make sure it isn’t a bug.

Details
Actually, I’m currently testing uv as a replacement for pip.

So this is what I configure in pyproject.toml for PyTorch 2.6.0 with cu126:

[[tool.uv.index]]
name = "pytorch-cu126"
url = "https://download.pytorch.org/whl/cu126"
explicit = true

[tool.uv.sources]
torch = [
    { index = "pytorch-cu126" },
]
torchvision = [
    { index = "pytorch-cu126" },
]

When I switch this to cu118 or remove it (default, CUDA 12.4), I get the following from uv lock, so I see the CUDA dependencies only come with cu118 or cu124, not with cu126:

cu126 to cu124

Resolved 126 packages in 44.21s
Add nvidia-cublas-cu12 v12.4.5.8
Add nvidia-cuda-cupti-cu12 v12.4.127
Add nvidia-cuda-nvrtc-cu12 v12.4.127
Add nvidia-cuda-runtime-cu12 v12.4.127
Add nvidia-cudnn-cu12 v9.1.0.70
Add nvidia-cufft-cu12 v11.2.1.3
Add nvidia-curand-cu12 v10.3.5.147
Add nvidia-cusolver-cu12 v11.6.1.9
Add nvidia-cusparse-cu12 v12.3.1.170
Add nvidia-cusparselt-cu12 v0.6.2
Add nvidia-nccl-cu12 v2.21.5
Add nvidia-nvjitlink-cu12 v12.4.127
Add nvidia-nvtx-cu12 v12.4.127
Update torch v2.6.0+cu126 -> v2.6.0
Update torchvision v0.21.0, v0.21.0+cu126 -> v0.21.0
Add triton v3.2.0

cu126 to cu118

Resolved 124 packages in 5.03s
Add nvidia-cublas-cu11 v11.11.3.6
Add nvidia-cuda-cupti-cu11 v11.8.87
Add nvidia-cuda-nvrtc-cu11 v11.8.89
Add nvidia-cuda-runtime-cu11 v11.8.89
Add nvidia-cudnn-cu11 v9.1.0.70
Add nvidia-cufft-cu11 v10.9.0.58
Add nvidia-curand-cu11 v10.3.0.86
Add nvidia-cusolver-cu11 v11.4.1.48
Add nvidia-cusparse-cu11 v11.7.5.86
Add nvidia-nccl-cu11 v2.21.5
Add nvidia-nvtx-cu11 v11.8.86
Update torch v2.6.0+cu126 -> v2.6.0+cu118
Update torchvision v0.21.0, v0.21.0+cu126 -> v0.21.0+cu118
Add triton v3.2.0

That’s not the case: the PyTorch binaries built with CUDA 12.6 still ship with CUDA runtime dependencies, and the install log will show it in the same way as for CUDA 12.4.

If you want to use your locally installed CUDA toolkit you can build PyTorch from source.
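For reference, a from-source build along those lines might look roughly like the following. This is only a sketch; the exact steps and supported flags are version-dependent and documented in the pytorch/pytorch repository’s build instructions:

```shell
# Sketch: build PyTorch against the CUDA toolkit already present in the
# image, instead of pulling the nvidia-*-cu12 runtime wheels.
git clone --recursive --branch v2.6.0 https://github.com/pytorch/pytorch
cd pytorch
pip install -r requirements.txt
export CUDA_HOME=/usr/local/cuda   # toolkit location in the nvidia/cuda images
USE_CUDA=1 python setup.py install
```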


I have the same issue. When going from torch==2.6.0+cu124 to torch==2.6.0+cu126, the CUDA packages are removed.

Uninstalled 16 packages in 133ms
Installed 2 packages in 125ms
 - nvidia-cublas-cu12==12.4.5.8
 - nvidia-cuda-cupti-cu12==12.4.127
 - nvidia-cuda-nvrtc-cu12==12.4.127
 - nvidia-cuda-runtime-cu12==12.4.127
 - nvidia-cudnn-cu12==9.1.0.70
 - nvidia-cufft-cu12==11.2.1.3
 - nvidia-curand-cu12==10.3.5.147
 - nvidia-cusolver-cu12==11.6.1.9
 - nvidia-cusparse-cu12==12.3.1.170
 - nvidia-cusparselt-cu12==0.6.2
 - nvidia-nccl-cu12==2.21.5
 - nvidia-nvjitlink-cu12==12.4.127
 - nvidia-nvtx-cu12==12.4.127
 - torch==2.6.0+cu124
 + torch==2.6.0+cu126
 - torchvision==0.21.0+cu124
 + torchvision==0.21.0+cu126
 - triton==3.2.0

This is expected since the PyTorch binaries ship with their own CUDA runtime dependencies as already explained.
If you install PyTorch built with CUDA 12.6, the previous CUDA libs should be uninstalled and replaced with the ones from 12.6.
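One way to verify this swap is to list the NVIDIA runtime wheels present in the environment before and after the upgrade. A small standard-library-only sketch (the function name is my own, not part of any PyTorch tooling):

```python
# List the NVIDIA CUDA runtime wheels (nvidia-*) installed in the current
# environment, e.g. to confirm they were replaced when moving from +cu124
# to +cu126 builds. Uses only the standard library.
from importlib import metadata


def installed_nvidia_wheels():
    """Return sorted (name, version) pairs for installed nvidia-* packages."""
    found = []
    for dist in metadata.distributions():
        name = dist.metadata["Name"]
        if name and name.lower().startswith("nvidia-"):
            found.append((name, dist.version))
    return sorted(found)


if __name__ == "__main__":
    for name, version in installed_nvidia_wheels():
        print(f"{name}=={version}")
```

Running it after installing the cu126 wheels should show the 12.6.x package versions (e.g. nvidia-cuda-runtime-cu12==12.6.77) instead of the 12.4.x ones.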

Running pip install torch --index-url https://download.pytorch.org/whl/cu126 shows:

Collecting nvidia-cuda-nvrtc-cu12==12.6.77 (from torch)
  Downloading https://download.pytorch.org/whl/cu126/nvidia_cuda_nvrtc_cu12-12.6.77-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cuda-runtime-cu12==12.6.77 (from torch)
  Downloading https://download.pytorch.org/whl/cu126/nvidia_cuda_runtime_cu12-12.6.77-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cuda-cupti-cu12==12.6.80 (from torch)
  Downloading https://download.pytorch.org/whl/cu126/nvidia_cuda_cupti_cu12-12.6.80-py3-none-manylinux2014_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cudnn-cu12==9.5.1.17 (from torch)
  Downloading https://download.pytorch.org/whl/cu126/nvidia_cudnn_cu12-9.5.1.17-py3-none-manylinux_2_28_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cublas-cu12==12.6.4.1 (from torch)
  Downloading https://download.pytorch.org/whl/cu126/nvidia_cublas_cu12-12.6.4.1-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cufft-cu12==11.3.0.4 (from torch)
  Downloading https://download.pytorch.org/whl/cu126/nvidia_cufft_cu12-11.3.0.4-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-curand-cu12==10.3.7.77 (from torch)
  Downloading https://download.pytorch.org/whl/cu126/nvidia_curand_cu12-10.3.7.77-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cusolver-cu12==11.7.1.2 (from torch)
  Downloading https://download.pytorch.org/whl/cu126/nvidia_cusolver_cu12-11.7.1.2-py3-none-manylinux2014_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cusparse-cu12==12.5.4.2 (from torch)
  Downloading https://download.pytorch.org/whl/cu126/nvidia_cusparse_cu12-12.5.4.2-py3-none-manylinux2014_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cusparselt-cu12==0.6.3 (from torch)
  Downloading https://download.pytorch.org/whl/cu126/nvidia_cusparselt_cu12-0.6.3-py3-none-manylinux2014_x86_64.whl.metadata (6.8 kB)
Requirement already satisfied: nvidia-nccl-cu12==2.21.5 in /usr/local/lib/python3.12/dist-packages (from torch) (2.21.5)
Collecting nvidia-nvtx-cu12==12.6.77 (from torch)
  Downloading https://download.pytorch.org/whl/cu126/nvidia_nvtx_cu12-12.6.77-py3-none-manylinux2014_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-nvjitlink-cu12==12.6.85 (from torch)
  Downloading https://download.pytorch.org/whl/cu126/nvidia_nvjitlink_cu12-12.6.85-py3-none-manylinux2010_x86_64.manylinux_2_12_x86_64.whl.metadata (1.5 kB)
Requirement already satisfied: triton==3.2.0 in /usr/local/lib/python3.12/dist-packages (from torch) (3.2.0)

which is correct since the command explicitly specifies to install PyTorch binaries with CUDA 12.6 runtime dependencies.

This indeed works with pip, but when using uv, as the OP did, it does not install the CUDA dependencies.
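One way to see what uv actually resolved, assuming a recent uv that provides the uv tree command, is to inspect the lockfile’s dependency tree from the project directory:

```shell
# Resolve the project and list any CUDA runtime wheels uv put in the lockfile
uv lock
uv tree | grep -i nvidia || echo "no nvidia-* packages resolved"
```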

I’m not familiar with uv, sorry.
CC @malfet in case you’ve seen some unexpected behavior.