CUDA driver version is insufficient for CUDA runtime version

Hello,

I am new to the PyTorch & CUDA stack in ML and am currently stuck on an issue with the error message above.

Most likely the issue is an incompatibility between CUDA and the NVIDIA driver, as the error message says. I am designing and training a model within an educational notebook, so its correctness should have been verified by its creator. I started changing CUDA versions while working on an OOM issue on the GPU when training my models.

The advised steps were to match the CUDA and NVIDIA driver versions against the compatibility baseline: https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#cuda-major-component-versions

My current environment: Windows 7 x64, Python 3.6, CUDA v7.0 (the final version that still supports Windows 7), and the driver that came with the CUDA installer (346.xx), with the latest one (425.31) installed later.

The issue still exists, and I have reached the point where it makes sense to ask the community for help.

Do you have any proposals to overcome this issue?

Thanks!

PyTorch needs CUDA >= 9.2, so you won't be able to compile it with CUDA 7.0.
That being said, if you install the conda binaries or pip wheels, the CUDA runtime (cudnn etc.) will be installed with them, so your workstation would only need the appropriate NVIDIA driver, as described in Table 2 in your link.
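As a quick sanity check of that setup, a minimal sketch (it assumes only that PyTorch is installed, and degrades gracefully if it is not) asks the binary itself which CUDA runtime it shipped with and whether it can reach the driver:

```python
def cuda_status() -> str:
    """Report whether this PyTorch build can reach the NVIDIA driver."""
    try:
        import torch
    except ImportError:
        return "PyTorch is not installed in this environment"
    return "\n".join([
        f"PyTorch: {torch.__version__}",
        f"Bundled CUDA runtime: {torch.version.cuda}",  # ships inside the wheel
        f"Driver usable: {torch.cuda.is_available()}",  # False -> driver missing or too old
    ])

print(cuda_status())
```

If `Driver usable` comes back `False` with a `torch.version.cuda` printed, the binary's runtime is fine and only the driver needs updating.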

Thank you for the valuable information. I'm still working through the dependency conflicts.

In a separate thread I saw an engineer concerned that his graphics card didn't support the latest CUDA driver, and you answered that this wasn't true in his case. That raised a question for me: where is the list mapping card models to CUDA support? Does it mean that even if a card supports CUDA, it might still not work with the latest CUDA driver (i.e., backward compatibility is broken)?

P.S.
My card is an NVIDIA GTX 860M (with CUDA support).

Your GTX 860M would have a compute capability of 5.0 and would thus work with the PyTorch binaries.
Make sure you have the appropriate NVIDIA driver for the CUDA version you select on the installation page. You can find the required NVIDIA driver for each CUDA version here, in Table 1.
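That "Table 1" check is easy to automate. The sketch below encodes a few minimum Windows driver versions per CUDA release; the numbers are copied from NVIDIA's release notes as I remember them, so treat them as illustrative and verify against the actual table before relying on this:

```python
# Minimum Windows driver per CUDA release (illustrative values --
# verify against Table 1 of the CUDA toolkit release notes).
MIN_WINDOWS_DRIVER = {
    "10.1": (418, 96),
    "10.0": (411, 31),
    "9.2":  (398, 26),
    "9.0":  (385, 54),
}

def driver_ok(cuda_version: str, driver: str) -> bool:
    """Return True if `driver` (e.g. '425.31') meets the minimum for `cuda_version`."""
    required = MIN_WINDOWS_DRIVER[cuda_version]
    installed = tuple(int(part) for part in driver.split("."))
    return installed >= required

print(driver_ok("10.1", "425.31"))  # True: the later driver covers CUDA 10.1
print(driver_ok("10.1", "346.43"))  # False: the installer-bundled 346.xx is far too old
```

The comparison works because driver versions compare correctly as tuples of integers (major first, then minor).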

Note that you don’t need a local CUDA toolkit installation, as the conda binaries and pip wheels will ship with their CUDA (cudnn, NCCL, etc.) runtimes.
You would need the local CUDA toolkit if you are planning to build PyTorch from source or compile any custom CUDA extensions or 3rd-party libs.
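To confirm the binaries can actually use the card, a guarded sketch (it runs even on CPU-only machines or without PyTorch installed) queries the compute capability PyTorch sees for GPU 0:

```python
def device_capability() -> str:
    """Report the compute capability of GPU 0, if PyTorch can reach a driver."""
    try:
        import torch
    except ImportError:
        return "PyTorch is not installed"
    if not torch.cuda.is_available():
        return "No usable CUDA driver/device"
    major, minor = torch.cuda.get_device_capability(0)
    return f"Compute capability: {major}.{minor}"  # a GTX 860M should report 5.0

print(device_capability())
```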


Thank you, I fixed the dependency issue.

Note that you don’t need a local CUDA toolkit installation, as the conda binaries and pip wheels will ship with their CUDA (cudnn, NCCL, etc.) runtimes.

This sounds awesome! I had to install CUDA separately before, when running TF code; not needing that makes PyTorch way more convenient!
