Cuda::is_available() -> True with Python, false with C++

When I call `torch.cuda.is_available()` from Python, it returns `True`.
However, when I call `torch::cuda::is_available()` in C++, it returns `false`.

The docs state that it could be a driver problem.
Any idea how I can find the cause? An issue with a missing library is likely, because I am running everything in a Singularity container.
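For reference, this is the kind of minimal check I'm running on the C++ side (a sketch; it assumes libtorch is on the include and link path, the actual build setup is not shown here):

```cpp
// Minimal probe of what the C++ libtorch build sees.
#include <torch/torch.h>
#include <iostream>

int main() {
  // Same query as Python's torch.cuda.is_available()
  std::cout << "CUDA available: " << std::boolalpha
            << torch::cuda::is_available() << '\n';
  // Number of visible CUDA devices (0 when CUDA is unavailable)
  std::cout << "Device count:   " << torch::cuda::device_count() << '\n';
  return 0;
}
```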

[s2548259@pg-gpu ~]$ nvidia-smi
Thu Jul 18 21:28:10 2019       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.67       Driver Version: 418.67       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 108...  On   | 00000000:03:00.0 Off |                  N/A |
|  0%   16C    P8     9W / 250W |      0MiB / 11178MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 108...  On   | 00000000:82:00.0 Off |                  N/A |
|  0%   22C    P8     8W / 250W |      0MiB / 11178MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Do you use the same PyTorch distribution (i.e. libtorch cmake from /usr/local/lib/python3.x/dist-packages/torch/share/cmake or somesuch)?
In the end, the same libtorch should behave the same way…
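To make sure the C++ build picks up the same (CUDA-enabled) libtorch that Python uses, you can point CMake at the cmake files shipped inside the Python package. A sketch (the path is the one mentioned above; replace `python3.x` with your actual version, and the project/target names are just placeholders):

```cmake
cmake_minimum_required(VERSION 3.10)
project(cuda_check)

# Use the libtorch bundled with the Python install, so both
# languages load the same distribution.
list(APPEND CMAKE_PREFIX_PATH
     "/usr/local/lib/python3.x/dist-packages/torch/share/cmake")

find_package(Torch REQUIRED)

add_executable(cuda_check main.cpp)
target_link_libraries(cuda_check "${TORCH_LIBRARIES}")
```

If `find_package(Torch)` instead resolves to a separately downloaded CPU-only libtorch, `torch::cuda::is_available()` will return `false` regardless of the driver.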

Best regards

Thomas

@tom I compiled the binary I was executing on a different machine, which did not have the CUDA version of PyTorch installed.

It’s resolved now! Thanks for your help. 🙂