I am trying to configure PyTorch with CUDA support, but torch.cuda.is_available() keeps returning False.
According to nvidia-smi, I am using CUDA 10.0. I modified my ~/.bash_profile to add the CUDA path.
When attempting to use CUDA, I received this error:
THCudaCheck FAIL file=/pytorch/aten/src/THC/THCGeneral.cpp line=50 error=30 : unknown error
Is there a solution?
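For reference, the additions to ~/.bash_profile look roughly like this (the /usr/local/cuda-10.0 prefix is an assumption; adjust to wherever CUDA is actually installed):

```shell
# Hypothetical ~/.bash_profile additions for a CUDA 10.0 install
# under /usr/local/cuda-10.0 (adjust the prefix to your system).
export PATH="/usr/local/cuda-10.0/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda-10.0/lib64:$LD_LIBRARY_PATH"
```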
You can always try setting the CUDA_HOME environment variable. By the way, one easy way to check whether torch is pointing at the right path is:
from torch.utils.cpp_extension import CUDA_HOME
print(CUDA_HOME) # by default it is set to /usr/local/cuda/
Interestingly, I got "No CUDA runtime found" despite assigning it the CUDA path; nvcc did verify the CUDA version. Running
CUDA_HOME=a/b/c python -c "from torch.utils.cpp_extension import CUDA_HOME; print(CUDA_HOME)"
prints a/b/c for me, showing that torch correctly picks up the value assigned to the CUDA_HOME environment variable.
It detected the path, but said it can't find a CUDA runtime:
No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda-10.0'
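A minimal sketch for separating the two failure modes — a CPU-only torch build versus a broken driver/runtime (assumes torch is installed; no GPU is needed just to run the checks):

```python
import torch

# If this prints None, the installed wheel is a CPU-only build and no
# CUDA_HOME setting will help; a CUDA-enabled wheel is needed instead.
print(torch.version.cuda)

# False here, with a non-None version above, points at the driver or
# runtime (e.g. the "error 30 : unknown error" above), not the install.
print(torch.cuda.is_available())
```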
Excuse me, did you solve it?