torch.cuda.DeferredCudaCallError: CUDA call failed lazily at initialization with error: device >= 0 && device < num_gpus

This is thrown whenever I run anything without specifying CUDA_VISIBLE_DEVICES.
Full stack trace:

>>> torch.cuda.device()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: __init__() missing 1 required positional argument: 'device'
>>> torch.cuda.current_device()
Traceback (most recent call last):
  File "/nobackup/wenxuan/miniconda3/envs/pytorch3d/lib/python3.9/site-packages/torch/cuda/__init__.py", line 260, in _lazy_init
    queued_call()
  File "/nobackup/wenxuan/miniconda3/envs/pytorch3d/lib/python3.9/site-packages/torch/cuda/__init__.py", line 145, in _check_capability
    capability = get_device_capability(d)
  File "/nobackup/wenxuan/miniconda3/envs/pytorch3d/lib/python3.9/site-packages/torch/cuda/__init__.py", line 381, in get_device_capability
    prop = get_device_properties(device)
  File "/nobackup/wenxuan/miniconda3/envs/pytorch3d/lib/python3.9/site-packages/torch/cuda/__init__.py", line 399, in get_device_properties
    return _get_device_properties(device)  # type: ignore[name-defined]
RuntimeError: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1682343964576/work/aten/src/ATen/cuda/CUDAContext.cpp":50, please report a bug to PyTorch. 

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/nobackup/wenxuan/miniconda3/envs/pytorch3d/lib/python3.9/site-packages/torch/cuda/__init__.py", line 674, in current_device
    _lazy_init()
  File "/nobackup/wenxuan/miniconda3/envs/pytorch3d/lib/python3.9/site-packages/torch/cuda/__init__.py", line 264, in _lazy_init
    raise DeferredCudaCallError(msg) from e
torch.cuda.DeferredCudaCallError: CUDA call failed lazily at initialization with error: device >= 0 && device < num_gpus INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1682343964576/work/aten/src/ATen/cuda/CUDAContext.cpp":50, please report a bug to PyTorch. 

CUDA call was originally invoked at:

  File "<stdin>", line 1, in <module>
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/nobackup/wenxuan/miniconda3/envs/pytorch3d/lib/python3.9/site-packages/torch/__init__.py", line 1146, in <module>
    _C._initExtension(manager_path())
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/nobackup/wenxuan/miniconda3/envs/pytorch3d/lib/python3.9/site-packages/torch/cuda/__init__.py", line 197, in <module>
    _lazy_call(_check_capability)
  File "/nobackup/wenxuan/miniconda3/envs/pytorch3d/lib/python3.9/site-packages/torch/cuda/__init__.py", line 195, in _lazy_call
    _queued_calls.append((callable, traceback.format_stack()))

Environment:

Your GPU5 is in an error state, so try to fix it before digging into PyTorch.
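
If only that one device is broken, a stopgap is to hide it from PyTorch so the lazy CUDA initialization never touches it. A minimal sketch, assuming an 8-GPU machine where index 5 is the faulty device (adjust the index list to your setup):

import os

# Mask the faulty device (assumed here to be index 5 on an 8-GPU box) before
# torch is imported, so lazy CUDA initialization only sees the healthy GPUs.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3,4,6,7"

import torch

print(torch.cuda.device_count())    # only the healthy GPUs are enumerated
print(torch.cuda.current_device())  # should no longer raise DeferredCudaCallError

This matches the observation above that the error only appears when CUDA_VISIBLE_DEVICES is left unset, i.e. when the broken GPU is part of the visible set.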

Has your problem been solved? I ran into the same issue and can’t solve it either.

I also encountered this problem. How did you solve it?

I don’t know what’s causing the issue, but I would check dmesg for any Xid errors, reboot the workstation, and reinstall the driver.
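
A minimal sketch of that dmesg check (assuming you can read the kernel log on the machine; it may require root). NVIDIA driver faults are logged as "NVRM: Xid" lines, which also include the PCI bus ID of the affected GPU:

import subprocess

# Scan the kernel ring buffer for NVIDIA Xid error reports.
log = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
xid_lines = [line for line in log.splitlines() if "NVRM: Xid" in line]

for line in xid_lines:
    print(line)
if not xid_lines:
    print("No Xid errors found in dmesg.")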