RuntimeError: cuda runtime error (38) : no CUDA-capable device is detected at /opt/conda/conda-bld/pytorch_1565272271120/work/aten/src/THC/THCGeneral.cpp:50

Hello Everyone,

I have encountered the error:

RuntimeError: cuda runtime error (38) : no CUDA-capable device is detected at /opt/conda/conda-bld/pytorch_1565272271120/work/aten/src/THC/THCGeneral.cpp:50

I have tried many of the suggestions I found online, but I could not solve the problem.
I am working on a high-performance cluster; here are its attributes.

nvidia-smi

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.67       Driver Version: 418.67       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro M2000        Off  | 00000000:08:00.0 Off |                  N/A |
| 56%   41C    P0    23W /  75W |     41MiB /  4043MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla P4            Off  | 00000000:88:00.0 Off |                    0 |
| N/A   50C    P0    24W /  75W |   7346MiB /  7611MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    1     43048      C   julia                                       109MiB  |
+-----------------------------------------------------------------------------+

print(torch.__version__)
1.2.0
print(torch.cuda.device_count())
2
print(torch.cuda.is_available())
True
print(torch.cuda.current_device())
0
print(torch.cuda.get_device_name(0))
Tesla P4
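
For completeness, the checks above can be collected into one small script. The tensor allocation at the end is my addition (not in the original post): actually placing data on the GPU forces CUDA context creation, which is typically where "no CUDA-capable device is detected" is raised even when the query functions succeed.

```python
import torch

# Repeat the diagnostics above in a single script.
print(torch.__version__)              # e.g. 1.2.0
print(torch.cuda.device_count())      # number of visible GPUs
print(torch.cuda.is_available())      # True if the CUDA runtime initialized

if torch.cuda.is_available():
    idx = torch.cuda.current_device()
    print(idx, torch.cuda.get_device_name(idx))
    # Allocating a tensor on the device forces CUDA context creation,
    # which is where error 38 would actually surface.
    x = torch.ones(1, device="cuda")
    print(x.device)
```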

Could you update to the latest stable PyTorch release and retry the code?
The error message seems fishy, as PyTorch apparently detects the device in the posted script.
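
One more thing worth checking (an assumption on my side, since cluster schedulers commonly mask GPUs this way): print `CUDA_VISIBLE_DEVICES` from the exact process that fails. If the scheduler sets it to an empty string, CUDA reports error 38 even though `nvidia-smi` still lists the devices, and it would also explain why the device ordering in the script differs from the `nvidia-smi` output.

```python
import os

# On clusters, the scheduler often restricts GPU visibility through
# CUDA_VISIBLE_DEVICES; an empty value makes CUDA report
# "no CUDA-capable device is detected" (error 38).
print(os.environ.get("CUDA_VISIBLE_DEVICES", "<not set>"))
```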