I installed CUDA 10.1.105, reinstalled NVIDIA driver 430.64 and cuDNN 7.6.5, and even reinstalled Anaconda, because CUDA was not available in PyTorch.
python -m torch.utils.collect_env
Collecting environment information...
PyTorch version: 1.3.1
Is debug build: No
CUDA used to build PyTorch: Could not collect
OS: Ubuntu 18.04.4 LTS
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
CMake version: version 3.10.2
Python version: 3.7
Is CUDA available: No
CUDA runtime version: 10.1.105
GPU models and configuration: GPU 0: GeForce RTX 2080 Ti
Nvidia driver version: 430.64
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn.so.7.6.5
Versions of relevant libraries:
[pip3] numpy==1.19.1
[pip3] torch==1.3.1
[pip3] torchvision==0.4.2
You don’t need local CUDA and cuDNN installations if you are using the conda binary.
Only the NVIDIA driver will be used.
Based on the output you are seeing, I assume you’ve installed the CPU binary.
Could you reinstall PyTorch with the desired CUDA version using the install command from here and post the install log, if it’s still not working?
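A quick way to tell a CPU-only binary apart from a CUDA build is to inspect `torch.version.cuda`, which is `None` in CPU-only wheels regardless of any locally installed toolkit. The helper below is just an illustrative sketch (the function name `describe_torch_build` is mine, not from the thread), and it is guarded so it also runs on a machine where torch isn’t installed:

```python
# Diagnostic sketch: distinguish a CPU-only PyTorch build from a CUDA build.
# A CPU-only wheel reports torch.version.cuda as None even when a local CUDA
# toolkit and driver are present, because the wheel ships without CUDA kernels.
import importlib.util

def describe_torch_build():
    """Return a short string describing the installed PyTorch build."""
    if importlib.util.find_spec("torch") is None:
        return "torch is not installed"
    import torch
    if torch.version.cuda is None:
        return "CPU-only build (torch.version.cuda is None)"
    return "CUDA build {}, available: {}".format(
        torch.version.cuda, torch.cuda.is_available()
    )

print(describe_torch_build())
```

If this prints the CPU-only line, no amount of local CUDA/cuDNN installing will help; the binary itself has to be replaced.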
But again, when I run the command python -m torch.utils.collect_env, the PyTorch version can be seen. I also tried to install it again using the command conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch, but the CPU binary is always there.
“Binary” refers to the pip wheel or conda binary in this case.
My recommendation is to uninstall all CPU-only wheels, and reinstall the PyTorch wheel with the desired CUDA runtime.
It seems the error is raised due to a missing kernel in torchvision, so make sure you’ve also installed the matching torchvision build with all CUDA kernels.
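After reinstalling, a small end-to-end check like the sketch below can confirm that torch and torchvision are both present and that a CUDA kernel actually launches (the function name `check_cuda_stack` is my own; it degrades gracefully on a CPU-only machine):

```python
# Sanity-check sketch: verify torch and torchvision are installed and that
# a CUDA kernel can actually be executed, not just that the driver is seen.
import importlib.util

def check_cuda_stack():
    """Print a short report on the torch/torchvision CUDA stack.

    Returns True only if a GPU op runs successfully; safe on CPU-only setups.
    """
    for name in ("torch", "torchvision"):
        if importlib.util.find_spec(name) is None:
            print(name, "is not installed")
            return False
    import torch
    import torchvision
    print("torch", torch.__version__, "/ torchvision", torchvision.__version__)
    if not torch.cuda.is_available():
        print("CUDA is not available in this build")
        return False
    # Run a tiny matmul on the GPU to confirm kernels load and execute.
    x = torch.randn(4, 4, device="cuda")
    print("GPU matmul result sum:", (x @ x).sum().item())
    return True

check_cuda_stack()
```

Mismatched torch/torchvision versions (e.g. a CUDA torch with a CPU-only torchvision) are a common cause of the missing-kernel errors mentioned above.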