CUDA and Torch install via pip vs. conda

I am trying to make my repository as reproducible as possible, and after a discussion with ChatGPT I thought it would be a good idea to use conda with an environment.yml file instead of a requirements.txt. However, this approach does not work for me. Now I would like to know what the current best-practice way to go is.

From my discussion with ChatGPT, my understanding of pip vs. conda was as follows: pip is only a Python package manager. Libraries like CUDA contain binaries and create compatibility constraints based on other specs, such as the CUDA driver and the operating system. These specs are ignored when I use pip; to make it work, I also have to specify a wheel. Conda, however, also checks this extended set of compatibility constraints, can figure out the dependencies, and finds an appropriate wheel for you.
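As a side note, the CUDA build of a pip wheel is visible in the version string itself: pins like `2.6.0+cu124` use the PEP 440 "local version" segment (everything after the `+`) to identify the build variant. A quick sketch of how to read such a pin:

```python
# A pin like "2.6.0+cu124" encodes the CUDA build in the PEP 440
# local version segment (the part after '+').
version = "2.6.0+cu124"
release, _, local = version.partition("+")
print(release)  # -> 2.6.0 (the PyTorch release)
print(local)    # -> cu124 (built against CUDA 12.4)
```

This is why a plain `pip install torch` can silently give you a different build than the `+cu124` one you intended: the local segment has to be requested explicitly (or via the matching index URL).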

In my concrete case I have to install torch, torch-geometric and torch-scatter. Using conda, this results in either a CPU wheel or a conflict during environment creation, depending on where you pin versions in the environment.yml file. Apparently, conda cannot find a suitable solution, even though a working one exists. When I started my project, I found a way with pip to combine all packages by trial and error. I paste the requirements below.
Now I would like to know:

  1. Was my understanding about pip/conda wrong?
  2. What is the cleanest solution?

The requirements (working with pip and a small trick):
torch 2.6.0+cu124
torch_cluster 1.6.3+pt26cu124
torch-geometric 2.7.0
torch_scatter 2.1.2+pt26cu124
torch_sparse 0.6.18+pt26cu124
torch_spline_conv 1.2.2+pt26cu124
torchaudio 2.6.0
torchvision 0.21.0
nvidia-cublas-cu12 12.4.5.8
nvidia-cuda-cupti-cu12 12.4.127
nvidia-cuda-nvrtc-cu12 12.4.127
nvidia-cuda-runtime-cu12 12.4.127
nvidia-cudnn-cu12 9.1.0.70
nvidia-cufft-cu12 11.2.1.3
nvidia-curand-cu12 10.3.5.147
nvidia-cusolver-cu12 11.6.1.9
nvidia-cusparse-cu12 12.3.1.170
nvidia-cusparselt-cu12 0.6.2
nvidia-nccl-cu12 2.21.5
nvidia-nvjitlink-cu12 12.4.127
nvidia-nvtx-cu12 12.4.127
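For reference, one common way to make pip resolve pins like these (not necessarily the exact trick used above) is to point it at the PyTorch and PyG wheel indexes directly in the requirements file. A sketch, assuming the cu124 builds listed above:

```
# Sketch of a requirements.txt for the versions above (assumes CUDA 12.4)
--extra-index-url https://download.pytorch.org/whl/cu124
-f https://data.pyg.org/whl/torch-2.6.0+cu124.html
torch==2.6.0+cu124
torchvision==0.21.0
torchaudio==2.6.0
torch-geometric==2.7.0
torch-scatter==2.1.2+pt26cu124
torch-sparse==0.6.18+pt26cu124
torch-cluster==1.6.3+pt26cu124
torch-spline-conv==1.2.2+pt26cu124
```

The `--extra-index-url` line tells pip where to find the `+cu124` torch wheels, and the `-f` line points it at the matching prebuilt PyG extension wheels.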

I think your understanding about pip and conda is correct. conda has the concept of “channels” that tell it where to find a package. Sometimes enabling the conda-forge channel is helpful when the default channel is insufficient. (for example: conda install scipy --channel conda-forge).

That being said, have you considered using Docker? There are Docker images that come with CUDA and PyTorch already set up: PyTorch | NVIDIA NGC. You can just customize these images to install the Python packages that are not already included.

Thank you for your response.

I have already tried using these channels:
channels:

  • nvidia
  • pytorch
  • conda-forge
  • defaults
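For completeness, those channels sit in the environment.yml roughly like this. A sketch (the environment name and Python version are placeholders); one common workaround when conda's solver fails on the torch packages is to delegate them to a `pip:` subsection:

```yaml
# Sketch of an environment.yml; the torch packages are delegated to pip
# here, since conda's solver fails to find the matching GPU builds.
name: myproject        # hypothetical name
channels:
  - nvidia
  - pytorch
  - conda-forge
  - defaults
dependencies:
  - python=3.11        # assumed Python version
  - pip
  - pip:
      - torch==2.6.0+cu124
      - torch-geometric==2.7.0
```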

I also considered using Docker, but it seems that I will have to solve the same problem there, no? I think the torch-geometric and torch-scatter libraries are the problem here. I haven’t found any image covering these two.

This doesn’t have the exact versions you are looking for but it seems to work.

FROM nvcr.io/nvidia/pytorch:25.03-py3
RUN pip install torch_geometric
RUN pip install pyg_lib torch_scatter torch_sparse torch_cluster torch_spline_conv -f https://data.pyg.org/whl/torch-2.7.0+cu128.html

You can build this Dockerfile using docker build --tag kasus8 . and run it as follows:

docker run --rm --gpus all -it kasus8

I was able to import torch, torch_geometric, and torch_scatter with this setup without any errors.
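If you adapt this, a cheap way to catch a torch/extension mismatch is to import the packages during the build, so a broken combination fails at `docker build` time rather than at runtime. A sketch of a line that could be appended to the Dockerfile above:

```dockerfile
# Optional sanity check: fail the build early if the extension wheels
# don't match the torch build shipped in the base image.
RUN python -c "import torch, torch_geometric, torch_scatter; \
print('torch', torch.__version__, 'cuda', torch.version.cuda)"
```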

It seems there are dedicated Docker images for PyG that might be relevant: PyG | NVIDIA NGC. You can install the missing extension packages (e.g. torch_scatter) in those containers using

FROM nvcr.io/nvidia/pyg:25.03-py3
RUN pip install pyg_lib torch_scatter torch_sparse torch_cluster torch_spline_conv -f https://data.pyg.org/whl/torch-2.7.0+cu128.html

Release Notes for the PyG 25.03 image: PyG Release 25.03 - NVIDIA Docs

Maybe relevant: [Announcement] Deprecating PyTorch’s official Anaconda channel

So I’m not sure you still want to use conda, at least for installing PyTorch (I still use conda to create and manage virtual environments).