No CUDA-capable device is detected

When I run

import torch
torch.cuda.FloatTensor([1.])

I seem to be getting the error: RuntimeError: cuda runtime error (38) : no CUDA-capable device is detected at /pytorch/aten/src/THC/THCGeneral.cpp:50.

I’m using Debian (Stretch). Checking the nvidia driver on the console:

/sbin/modinfo nvidia_current

returns:

filename:       /lib/modules/4.9.0-11-amd64/updates/dkms/nvidia-current.ko
alias:          char-major-195-*
version:        418.74
supported:      external
license:        NVIDIA
srcversion:     AB4044DE27C9CA55579A110

when I do nvcc --version, I get:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Tue_Jun_12_23:07:04_CDT_2018
Cuda compilation tools, release 9.2, V9.2.148

On python, when I do

import torch
print(torch.version.cuda)  
print(torch.cuda.device_count())
print(torch.cuda.is_available())

I get:

9.2.148
0
False

I’ve tried doing os.environ["CUDA_VISIBLE_DEVICES"] = '0', but it still doesn’t work.
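
In case it helps, CUDA_VISIBLE_DEVICES is only read when the CUDA runtime initializes, so it has to be set before the first torch.cuda call (setting it before importing torch is the safest ordering). A minimal sketch of that ordering:

import os

# Set CUDA_VISIBLE_DEVICES before importing torch (or at least before
# any torch.cuda call), otherwise the CUDA runtime ignores it.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch
print(torch.cuda.is_available())
print(torch.cuda.device_count())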

Any help would be much appreciated!

Did you update the NVIDIA drivers or CUDA recently? If so, I would recommend restarting the machine, as we’ve seen similar errors in the past.
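
For reference, after the reboot a quick check like this should confirm whether PyTorch can see the GPU again (assuming the 418.74 driver loads cleanly):

import torch

# Post-reboot sanity check: all of these should now report a usable device.
print(torch.version.cuda)          # CUDA version PyTorch was built with
print(torch.cuda.is_available())   # should be True
print(torch.cuda.device_count())   # should be >= 1
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))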
