Hello,
I am trying to configure PyTorch with CUDA support, but torch.cuda.is_available() keeps returning False.
According to nvidia-smi, I am on CUDA 10.0. I modified my bash_profile to add the CUDA paths:
export PATH=/usr/local/cuda-10.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
When attempting to use CUDA, I receive this error:
THCudaCheck FAIL file=/pytorch/aten/src/THC/THCGeneral.cpp line=50 error=30 : unknown error
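For reference, even a minimal CUDA call is enough to trigger it (an illustrative snippet, not my exact code):
import torch
x = torch.zeros(1).cuda()  # moving a tensor to the GPU initializes the CUDA context and hits the error above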
Is there some solution?
LeviViana
You can always try to set the environment variable CUDA_HOME. By the way, one easy way to check if torch is pointing to the right path is:
from torch.utils.cpp_extension import CUDA_HOME
print(CUDA_HOME) # by default it is set to /usr/local/cuda/
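It is also worth checking that the installed wheel was actually built with CUDA; a correct CUDA_HOME by itself is not enough (just a generic sanity check, nothing specific to your setup):
import torch
print(torch.version.cuda)  # None means a CPU-only build, in which case torch.cuda.is_available() can never be True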
Interestingly, I get "No CUDA runtime is found" even though CUDA_HOME is assigned the CUDA path. nvcc does report the expected CUDA version.
LeviViana
That’s weird.
CUDA_HOME=a/b/c python -c "from torch.utils.cpp_extension import CUDA_HOME; print(CUDA_HOME)"
This prints a/b/c for me, showing that torch correctly picks up the value assigned to the CUDA_HOME env variable.
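If CUDA_HOME resolves correctly but you still see that warning, then as far as I can tell the failure is in runtime initialization rather than the path: the "No CUDA runtime is found" message is printed when torch.cuda.is_available() is False. A rough way to narrow it down (a sketch, assuming a standard pip/conda install):
import torch
print(torch.cuda.is_available())  # False is what triggers the warning from torch.utils.cpp_extension
print(torch.cuda.device_count())  # 0 here, while nvidia-smi lists a GPU, points to a driver/runtime initialization problem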
It detected the path, but it says it cannot find a CUDA runtime:
No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda-10.0'
samm
Excuse me, did you solve it?
Did you find a solution to this issue? I have the same problem. Thanks.