No version satisfies the requirement torch==1.1x.x+cu111

I have two RTX 3090 GPUs, and nvidia-smi reports CUDA version 11.2. I tried to follow the instructions in “Start Locally | PyTorch”, but none of those builds returns True for torch.cuda.is_available().
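For reference, the check that keeps failing is just:

import torch

print(torch.__version__)          # the installed build, e.g. with a +cpu or +cuXXX suffix
print(torch.cuda.is_available())  # returns False no matter which build I install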

When I try to narrow the problem down further, I get "No CUDA GPUs are available":

>>> import torch
>>> torch.zeros(1).cuda()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/<username>/miniconda3/envs/<envname>/lib/python3.10/site-packages/torch/cuda/__init__.py", line 217, in _lazy_init
    torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available
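Two other values that help narrow this down are the CUDA version the installed wheel was built against and how many devices the runtime can actually see:

import torch

print(torch.version.cuda)         # CUDA version the wheel was built against; None means a CPU-only wheel
print(torch.cuda.device_count())  # number of GPUs the CUDA runtime can actually see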

I found a similar problem on Stack Overflow, "Pytorch says that CUDA is not available", which suggests that I should probably install a PyTorch build whose CUDA version is no higher than 11.2. I found the following command on the "Previous PyTorch Versions" page:

# CUDA 11.1
pip install torch==1.10.1+cu111 torchvision==0.11.2+cu111 torchaudio==0.10.1 -f https://download.pytorch.org/whl/torch_stable.html

But it fails with ERROR: Could not find a version that satisfies the requirement:

Looking in links: https://download.pytorch.org/whl/torch_stable.html
ERROR: Could not find a version that satisfies the requirement torch==1.10.1+cu111 (from versions: 1.11.0, 1.11.0+cpu, 1.11.0+cu102, 1.11.0+cu113, 1.11.0+cu115, 1.11.0+rocm4.3.1, 1.11.0+rocm4.5.2, 1.12.0, 1.12.0+cpu, 1.12.0+cu102, 1.12.0+cu113, 1.12.0+cu116, 1.12.0+rocm5.0, 1.12.0+rocm5.1.1)
ERROR: No matching distribution found for torch==1.10.1+cu111

How can I resolve this issue? Any suggestions would be much appreciated.

The pip error itself is expected: your environment runs Python 3.10 (visible in the traceback path), and the 1.10.1+cu111 wheels were never built for Python 3.10, which is why pip only offers 1.11.0 and newer.

Install the binaries with the CUDA 11.3 or 11.6 runtime and check if you are still hitting the issue.
If so, reinstall your NVIDIA drivers, reboot the node, and rerun your test.
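For reference, the matching install commands from the "Previous PyTorch Versions" page would look something like this (these are the 1.12.0 pairings; adjust the versions as needed):

# CUDA 11.3
pip install torch==1.12.0+cu113 torchvision==0.13.0+cu113 torchaudio==0.12.0 --extra-index-url https://download.pytorch.org/whl/cu113
# CUDA 11.6
pip install torch==1.12.0+cu116 torchvision==0.13.0+cu116 torchaudio==0.12.0 --extra-index-url https://download.pytorch.org/whl/cu116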


Later I found out this was caused by my accidentally running

export CUDA_VISIBLE_DEVICES=

Can you help me with this?
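For context, exporting CUDA_VISIBLE_DEVICES with an empty value hides every GPU from the CUDA runtime, which is exactly what the "No CUDA GPUs are available" error means; removing the variable should make both GPUs visible again. A minimal sketch, assuming a bash shell and a CUDA-enabled PyTorch build:

unset CUDA_VISIBLE_DEVICES                                    # remove the empty variable so the GPUs are visible again
python -c "import torch; print(torch.cuda.is_available())"    # should now print True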