When I call torch.cuda.is_available()
from Python, it returns True.
However, when I call torch::cuda::is_available() in C++, it returns false.
The docs state that this could be a driver problem.
Any idea how I can find the cause? A missing library is a likely culprit, since I am running everything inside a Singularity container.
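For reference, a minimal sketch of the C++ check in question (assuming libtorch is linked in the usual way, e.g. via the standard CMake setup from the libtorch docs; exact build flags may differ from mine):

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  // Equivalent to torch.cuda.is_available() in Python:
  // returns True there, but false here inside the container.
  std::cout << "CUDA available: " << torch::cuda::is_available() << std::endl;
  // Also reports 0 devices when the CUDA runtime/driver is not found.
  std::cout << "Device count:   " << torch::cuda::device_count() << std::endl;
  return 0;
}
```

If a library is missing, the device count printed here should be 0 even though nvidia-smi (below) sees both GPUs.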
[s2548259@pg-gpu ~]$ nvidia-smi
Thu Jul 18 21:28:10 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.67 Driver Version: 418.67 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 108... On | 00000000:03:00.0 Off | N/A |
| 0% 16C P8 9W / 250W | 0MiB / 11178MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce GTX 108... On | 00000000:82:00.0 Off | N/A |
| 0% 22C P8 8W / 250W | 0MiB / 11178MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+