RuntimeError: cuda runtime error (38)

When I try to call torch.cuda.device_count() or any other torch.cuda function, the following error is raised:

RuntimeError: cuda runtime error (38) : no CUDA-capable device is detected at /pytorch/torch/lib/THC/THCGeneral.c:70

I installed CUDA 8.0 and cuDNN on Ubuntu 14.04 before installing PyTorch. Running the deviceQuery sample confirms that CUDA 8.0 is installed correctly:

CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GTX 1050"
  CUDA Driver Version / Runtime Version          9.0 / 8.0
  CUDA Capability Major/Minor version number:    6.1
  Total amount of global memory:                 1991 MBytes (2087714816 bytes)
  ( 5) Multiprocessors, (128) CUDA Cores/MP:     640 CUDA Cores
  GPU Max Clock rate:                            1506 MHz (1.51 GHz)
  Memory Clock rate:                             3504 Mhz
  Memory Bus Width:                              128-bit
  L2 Cache Size:                                 1048576 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z):  (1024, 1024, 64)
  Max dimension size of a grid size   (x,y,z):   (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 101 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = GeForce GTX 1050
Result = PASS

Then I installed PyTorch with pip under Python 3.6, following the official website instructions. In Python, 'import torch' works fine, but calling any torch.cuda function raises the runtime error above.

Could anyone point out what might be wrong with my installation?

Thank you!
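For reference, the failing call can be reduced to a short, self-contained check. This is just a sketch (it assumes torch is installed): the point is that the error does not appear at import time, only once a torch.cuda call initializes the CUDA runtime.

```python
import importlib.util

# Minimal repro sketch: torch.cuda functions lazily initialize the CUDA
# runtime, and error 38 ("no CUDA-capable device is detected") surfaces
# at that initialization step, not when torch is imported.
if importlib.util.find_spec("torch") is None:
    print("torch is not installed")
else:
    import torch
    try:
        # Any torch.cuda call triggers CUDA runtime initialization.
        print("device count:", torch.cuda.device_count())
    except RuntimeError as exc:
        print("CUDA initialization failed:", exc)
```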

What’s the output of your nvidia-smi?

I attached the output. There is an ERR! in the Pwr:Usage field. Is that a problem?

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.111                Driver Version: 384.111                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1050    Off  | 00000000:65:00.0  On |                  N/A |
|  0%   52C    P0    ERR! / 120W|    105MiB /  1991MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      1430    G   /usr/bin/X                                    103MiB  |
+-----------------------------------------------------------------------------+

Problem solved.
I made a silly mistake.
At the top of my script there is a line:

os.environ["CUDA_VISIBLE_DEVICES"] = '3'

I did not notice it the first time I ran the program and got a different error. I then changed it to os.environ["CUDA_VISIBLE_DEVICES"] = '0' without restarting the kernel, which is what produced this error.

Restarting the program with os.environ["CUDA_VISIBLE_DEVICES"] = '0' solves the problem.
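For anyone hitting the same thing: CUDA_VISIBLE_DEVICES is only read when the CUDA runtime initializes, so it has to be set before the first torch.cuda call in the process; editing it afterwards in a still-running kernel changes nothing. A minimal sketch (the helper name set_visible_devices is mine, not a torch API):

```python
import os

def set_visible_devices(indices):
    """Restrict which GPUs the CUDA runtime will see.

    Must run before the CUDA runtime is initialized (i.e. before the
    first torch.cuda call in this process); an already-initialized
    process ignores later changes to the variable.
    """
    os.environ["CUDA_VISIBLE_DEVICES"] = ",".join(str(i) for i in indices)

set_visible_devices([0])                    # expose only GPU 0
print(os.environ["CUDA_VISIBLE_DEVICES"])  # → 0
```

With a single GTX 1050, '3' asks for a fourth GPU that does not exist, which is exactly why the runtime reported "no CUDA-capable device is detected".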