Background
I'm trying to build a PyTorch Docker image, specifically dstoolkit-devcontainers/src/sample_pytorch_gpu_project/.devcontainer/Dockerfile at main · microsoft/dstoolkit-devcontainers. I was creating a PyTorch container with nvidia/cuda as the base image and then installing PyTorch on top of it, hoping that PyTorch would use the CUDA dependencies already installed in the base image. Recently I learned that this doesn't happen: PyTorch ignores the CUDA libraries available in the base container and instead installs its own as part of pip install torch==2.6.0. So I've been investigating how to make PyTorch skip installing its CUDA dependencies and use the already installed ones instead.
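Roughly, the pattern looks like the sketch below (simplified; the real Dockerfile installs more tooling and pins more versions):

FROM nvidia/cuda:12.6.3-cudnn-runtime-ubuntu24.04

# Minimal Python setup (sketch only)
RUN apt-get update && apt-get install -y --no-install-recommends python3 python3-venv \
    && rm -rf /var/lib/apt/lists/*
RUN python3 -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

# The step in question: this pulls in PyTorch's own CUDA runtime wheels
# instead of reusing the CUDA libraries already present in the base image
RUN pip install torch==2.6.0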
Question
As part of this exploration, I realized that pip install torch==2.6.0 actually installs the CUDA 12.4 build, which mismatches the base container I use, nvidia/cuda:12.6.3-cudnn-runtime-ubuntu24.04.
When I switched to --index-url https://download.pytorch.org/whl/cu126, it stopped installing the CUDA dependencies and torch now refers to the locally preinstalled CUDA libraries, so I achieved the goal I mentioned in Background. But I only get this behavior with cu126; with cu118 or the default index (CUDA 12.4), it still installs the CUDA dependencies. Is this inconsistent behavior expected? Does the pip install torch process check the local CUDA version and decide whether or not to install the dependencies? If this is expected and this is how to use locally installed CUDA dependencies, I'm happy with it, but I wanted to make sure it is not a bug.
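For concreteness, these are roughly the two installs I am comparing (a sketch; run inside a container built from the base image above):

# Default index: resolves to the CUDA 12.4 build and pulls in the nvidia-* runtime wheels
pip install torch==2.6.0

# CUDA 12.6 index: the case where I observed no nvidia-* wheels being installed
pip install torch==2.6.0 --index-url https://download.pytorch.org/whl/cu126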
Details
Actually, I'm currently testing uv as a replacement for pip.
This is what I configure in pyproject.toml for PyTorch 2.6.0 with cu126:
[[tool.uv.index]]
name = "pytorch-cu126"
url = "https://download.pytorch.org/whl/cu126"
explicit = true

[tool.uv.sources]
torch = [
    { index = "pytorch-cu126" },
]
torchvision = [
    { index = "pytorch-cu126" },
]
When I switch this to cu118 or remove it entirely (default index, CUDA 12.4), I get the following with uv lock, so the CUDA dependencies only come in with cu118 or cu124 and not with cu126.
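This is roughly how I am checking it (a sketch; the grep pattern assumes the runtime wheels follow the usual nvidia-* naming):

# Re-resolve and look for CUDA runtime wheels in the lockfile
uv lock
grep -i "nvidia-" uv.lock || echo "no nvidia-* packages in the lockfile"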
That's not the case, as the PyTorch binaries built with CUDA 12.6 still ship with CUDA runtime dependencies, and the install log will show it in the same way as for CUDA 12.4.
If you want to use your locally installed CUDA toolkit, you can build PyTorch from source.
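A minimal sketch of such a source build, assuming the CUDA toolkit is already installed and CUDA_HOME points at it (see the official build instructions for the full prerequisites; the toolkit path below is just an example):

git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
pip install -r requirements.txt
# Point the build at the locally installed toolkit
export CUDA_HOME=/usr/local/cuda
python setup.py develop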
This is expected, since the PyTorch binaries ship with their own CUDA runtime dependencies, as already explained.
If you install PyTorch built with CUDA 12.6, the previous CUDA libs should be uninstalled and replaced with the ones from 12.6.
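One way to verify which CUDA runtime the installed build actually uses (a sketch; the nvidia-* naming of the runtime wheels is an assumption):

# Show which CUDA version the installed torch build was compiled against
python -c "import torch; print(torch.__version__, torch.version.cuda)"
# List any CUDA runtime wheels installed alongside torch
pip list | grep -i nvidia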