I found an NVIDIA compatibility matrix, but that didn't work. I'm testing with two PCs with two different GPUs and have updated to what is documented, at least I think so. I'm not sure what to do now; I have been trying for a week.
NVIDIA-SMI 522.06 Driver Version: 522.06 CUDA Version: 11.8
(base) C:\Users\rossroxas>python -m torch.utils.collect_env
Collecting environment information…
PyTorch version: 2.0.0+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Enterprise
GCC version: (x86_64-posix-seh, Built by strawberryperl.com project) 8.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.16 (main, Mar 8 2023, 10:39:24) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: False
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA RTX A5500
GPU 1: NVIDIA RTX A5500
GPU 2: NVIDIA RTX A5500
Nvidia driver version: 522.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
I installed PyTorch with
pip install torch
and got the CUDA graphics driver and cuDNN from NVIDIA. I had the impression that everything was included, and that I could check the GPU after the graphics driver install.
I think one of the confusing things is that the matrix I found on git doesn't really give a straightforward line-up of which versions are compatible with CUDA and cuDNN.
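For reference, the +cpu suffix in the collect_env output above already indicates a CPU-only wheel was installed. A quick check from Python makes this explicit (a minimal sketch, run in the same environment as the output above):

import torch

# A CPU-only wheel reports a "+cpu" version and no CUDA runtime,
# regardless of which driver or toolkit is installed on the machine.
print(torch.__version__)          # e.g. 2.0.0+cpu
print(torch.version.cuda)         # None for the CPU-only build
print(torch.cuda.is_available())  # False until a CUDA-enabled build is installed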
I don't understand which "matrix on git" you are referring to, as you can just select the desired PyTorch release and CUDA version in my previously posted link.
As also mentioned, your locally installed CUDA toolkit won't be used unless you build PyTorch from source or a custom CUDA extension, since the binaries ship with their own CUDA dependencies.
Again, your locally installed CUDA toolkit won’t be used, only the NVIDIA driver.
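If it helps, the usual fix is to reinstall from the CUDA 11.8 wheel index instead of the default PyPI package; treat the exact index URL and CUDA tag below as an assumption to double-check against the install selector on pytorch.org (driver 522.06 supports CUDA 11.8):

pip uninstall torch
pip install torch --index-url https://download.pytorch.org/whl/cu118
python -m torch.utils.collect_env

If the CUDA-enabled wheel is picked up, collect_env should then report a 2.0.0+cu118 build and Is CUDA available: True, with nothing needed beyond the NVIDIA driver.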