My pytorch library doesn't detect cuda

I have installed CUDA (11.8 and 12.0).
I have installed pytorch 2.2.1 for CUDA 11.8.
I also created a Qt .pro file and declared the paths to CUDA and pytorch in it.
I also added the CUDA library paths to .bashrc.
When I run my application:
#include <iostream>
#include <torch/torch.h>
#include <torch/jit.h>
#include <torch/nn.h>
#include <torch/script.h>

using namespace std;

int main()
{
    std::cout << "Cuda Device Count:" << torch::cuda::device_count() << std::endl;
    std::cout << "cudnn_is_available:" << torch::cuda::cudnn_is_available() << std::endl;
    std::cout << "cuda::is_available:" << torch::cuda::is_available() << std::endl;
    std::cout << "cuda::show_config:" << torch::show_config().c_str() << std::endl;
    return 0;
}

I'm seeing this output:
Cuda Device Count:0
cudnn_is_available:0
cuda::is_available:0
cuda::show_config:PyTorch built with:

  • GCC 9.4
  • C++ Version: 201703
  • Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
  • Intel(R) MKL-DNN v3.3.2 (Git Hash 2dc95a2ad0841e29db8b22fbccaf3e5da7992b01)
  • OpenMP 201511 (a.k.a. OpenMP 4.5)
  • LAPACK is enabled (usually provided by MKL)
  • NNPACK is enabled
  • CPU capability usage: AVX2
  • Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=12.1, CUDNN_VERSION=8.9.2, CXX_COMPILER=/usr/bin/c++, CXX_FLAGS=-Wno-deprecated-declarations -D_GLIBCXX_USE_CXX11_ABI=1 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=2.2.1, USE

Where am I wrong?

You might have installed (or built) the CPU-only binary, since _show_config() should report the CUDA runtime, cuDNN version, etc.:

print(torch._C._show_config())
PyTorch built with:
  - GCC 9.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.3.2 (Git Hash 2dc95a2ad0841e29db8b22fbccaf3e5da7992b01)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 12.1
  - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
  - CuDNN 8.9.2
  - Magma 2.6.1
...
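One way to act on that hint programmatically (the helper name and strings here are my own, just for illustration) is to look for a "CUDA Runtime" line in the config text; a CPU-only build does not print one:

```python
def is_cuda_build(config_text):
    # show_config() output of a CUDA-enabled build contains a "- CUDA Runtime ..." line
    return any(line.strip().startswith("- CUDA Runtime")
               for line in config_text.splitlines())

# Abbreviated examples modeled on the two outputs above
cpu_cfg = "PyTorch built with:\n  - GCC 9.4\n  - CPU capability usage: AVX2\n"
cuda_cfg = "PyTorch built with:\n  - GCC 9.3\n  - CUDA Runtime 12.1\n  - CuDNN 8.9.2\n"

print(is_cuda_build(cpu_cfg), is_cuda_build(cuda_cfg))  # -> False True
```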

I have downloaded (from the official website) pytorch 2.2.1 with CUDA 11.8 support, but the output of the application shows CUDA 12.1 (as seen above). Maybe it doesn't matter, but I have no idea.

Currently, I have installed CUDA 12.1 and changed my environment accordingly.
I also downloaded pytorch 2.2.1 with CUDA 12.1 support.
I now have the following environment:
declare -x CUDA_HOME="/usr/local/cuda-12.1"
declare -x LD_LIBRARY_PATH=":/usr/local/cuda-12.1/lib64:/opt/PYTORCH/libtorch22/libtorch/lib/"
declare -x PATH="/home/user/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/snap/bin:/usr/local/cuda-12.1/bin"
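As an aside, note the leading colon in that LD_LIBRARY_PATH value: the dynamic linker treats an empty entry as the current directory, which is rarely intended. A small sketch (the helper name is mine) to spot such entries:

```python
# Split LD_LIBRARY_PATH the way the dynamic linker does; an empty entry
# (e.g. produced by a leading or trailing colon) means "current directory".
def empty_entries(ld_library_path):
    entries = ld_library_path.split(":")
    return [i for i, entry in enumerate(entries) if entry == ""]

# The value from above: the leading colon shows up as an empty entry at index 0
path = ":/usr/local/cuda-12.1/lib64:/opt/PYTORCH/libtorch22/libtorch/lib/"
print(empty_entries(path))  # -> [0]
```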

But the problem is not solved.
Does that mean the download link on the official website is wrong?

No, it shouldn't be wrong. Were you able to use libtorch on your GPU before, or PyTorch (via Python)?

No, I have only used my GPU with TensorFlow:
user@usercomp:~$ nvidia-smi
Thu Feb 29 07:30:35 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.161.07             Driver Version: 535.161.07   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3060 ...    On  | 00000000:01:00.0 Off |                  N/A |
| N/A   37C    P0             N/A /  80W  |      8MiB /  6144MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A      2577      G   /usr/lib/xorg/Xorg                            4MiB |
+---------------------------------------------------------------------------------------+

nvidia-smi shows CUDA version 12.2, but I have only installed 12.1.
How can I solve my problem? I have no idea.

I solved my problem: I had to use the pytorch (libtorch) build with the other ABI (pre-cxx11).
How can I change the ABI for a conda/pip Python installation?

You cannot change it, as those binaries are built without the CXX11 ABI; you would need to build PyTorch from source.
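For anyone compiling their own C++ code against the pre-cxx11 libtorch download, the compiler flag has to match as well. A minimal sketch of the relevant qmake .pro lines, under the assumption that libtorch is unpacked at the path used earlier in this thread (adjust everything to your install; the CUDA build may additionally need -ltorch_cuda and -lc10_cuda):

```
# Match the pre-cxx11 ABI of the downloaded libtorch binaries
DEFINES += _GLIBCXX_USE_CXX11_ABI=0

# Example libtorch location; adjust to your install
LIBTORCH = /opt/PYTORCH/libtorch22/libtorch
INCLUDEPATH += $$LIBTORCH/include $$LIBTORCH/include/torch/csrc/api/include
LIBS += -L$$LIBTORCH/lib -ltorch -ltorch_cpu -lc10
```

On the Python side, torch.compiled_with_cxx11_abi() reports which ABI a given installation was built with.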