UserWarning: CUDA initialization: The NVIDIA driver on your system is too old (found version 10010)

Hi Everyone,

I’m having trouble with this error. The message is not clear to me: in one part it says NVIDIA driver and in another it says CUDA driver, and I’m not sure what each refers to, as the wording is quite generic.

“The NVIDIA driver on your system is too old (found version 10010)”

How do I interpret found version 10010? Is this the GPU driver (which has version 25.21.14.2531),
or is it the version of CUDA (10.2.89), or something else?

“go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver.”

Does this mean a PyTorch version compatible with CUDA 10.2.89, or with whatever “version 10010” refers to?
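For what it’s worth, the integer in the warning is the CUDA version reported by the driver, encoded as 1000 * major + 10 * minor (the convention the CUDA driver API uses), so 10010 decodes to CUDA 10.1. It is neither your Windows GPU driver version string (25.21.14.2531) nor your nvcc version. A small sketch of the decoding:

```python
# The warning's "found version 10010" is the driver's CUDA version encoded
# as 1000 * major + 10 * minor (the CUDA driver API convention).
def decode_cuda_version(encoded: int) -> str:
    major, rest = divmod(encoded, 1000)
    return f"{major}.{rest // 10}"

print(decode_cuda_version(10010))  # -> 10.1 (what the driver supports)
print(decode_cuda_version(10020))  # -> 10.2 (what nvcc/the toolkit reports)
```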

Environment Details:
Python

  • python --version
  • 3.9.7

Pytorch

  • print(torch.__version__)

  • 1.10.1
    Installed with:

  • conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch

CUDA - installed

  • nvcc --version
  • nvcc: NVIDIA (R) Cuda compiler driver
  • Copyright (c) 2005-2019 NVIDIA Corporation
  • Built on Wed_Oct_23_19:32:27_Pacific_Daylight_Time_2019
  • Cuda compilation tools, release 10.2, V10.2.89

GPU Card

  • NVIDIA GeForce GTX 670MX
  • Driver Version: 25.21.14.2531
  • Driver Date: 9/04/2019

OS

  • Windows 8.1 SP 3
  • Version 6.3.9600

Where the error is occurring:

Python 3.9.7 (default, Sep 16 2021, 16:59:28) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
C:\anaconda3\lib\site-packages\torch\cuda\__init__.py:80: UserWarning: CUDA initialization: The NVIDIA driver on your system is too old (found version 10010). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver. (Triggered internally at  ..\c10\cuda\CUDAFunctions.cpp:112.)
  return torch._C._cuda_getDeviceCount() > 0
False

I used the table on Start Locally | PyTorch to obtain the installation command as follows:

conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch

Thank you

Could you check the driver’s CUDA version via nvidia-smi? 10010 would indicate CUDA 10.1, while your nvcc compiler reports 10.2.

Hi @ptrblck

Thanks for replying to my topic!

I have checked this and it does indeed say 10.1. I am guessing nvcc and nvidia-smi both need to report the same version (either 10.1 or 10.2) for my environment to be set up correctly.

Fri Jan 14 08:30:03 2022       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 425.31       Driver Version: 425.31       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 670MX  WDDM  | 00000000:01:00.0 N/A |                  N/A |
| N/A   36C    P8    N/A /  N/A |    314MiB /  3072MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0                    Not Supported                                       |
+-----------------------------------------------------------------------------+

Not necessarily, as you could install the NVIDIA driver and the compiler separately (as is apparently the case in your setup).
To solve the initial error you would have to update the driver to a newer version.

Thanks very much for that. It did not seem like I would have much luck updating my NVIDIA drivers as I already had the latest release for my card, so instead I have used the option to downgrade pytorch.

It seems the latest I can get for CUDA 10.1 is PyTorch 1.8.1, according to Previous PyTorch Versions | PyTorch.

So I have now run

pip install torch==1.8.1+cu101 torchvision==0.9.1+cu101 torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html

Making progress now

>>> import torch
>>> torch.cuda.is_available()
False

At least it’s not giving me the error this time!

I just saw that you are using a 670MX, which has a compute capability of 3.0 and is not supported anymore (the lowest compute capability supported in the binaries would be 3.5).
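To make the mismatch concrete: a PyTorch binary ships kernels only for the architectures it was built for (torch.cuda.get_arch_list() reports them as names like 'sm_35'), and a device whose compute capability sits below the lowest shipped architecture has no kernel it can run. The helper and arch list below are a simplified illustrative sketch, not PyTorch’s actual internal check:

```python
# Simplified model of why a cc 3.0 card fails with binaries built for sm_35+.
# (Real CUDA compatibility rules are more involved; this captures the gist.)
def has_usable_kernel(device_cc, arch_list):
    """device_cc: (major, minor); arch_list: names like 'sm_35'."""
    for arch in arch_list:
        num = int(arch.split("_")[1])        # 'sm_35' -> 35
        if (num // 10, num % 10) <= device_cc:
            return True                      # a compiled arch the device can use
    return False

binary_archs = ["sm_35", "sm_50", "sm_60", "sm_70"]  # illustrative arch list
print(has_usable_kernel((3, 0), binary_archs))  # False -> the GTX 670MX case
print(has_usable_kernel((3, 5), binary_archs))  # True
```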

Thanks, where would I find the old binaries that still support compute capability 3.0?

I don’t know which binaries shipped with this older compute capability, so you would have to install some older releases and check if they would be working. As a starter I would probably use 1.0 and then try to bisect the releases. Alternatively, you could try to build from source, but I’m also unsure if some native kernels use methods introduced for later compute capabilities.
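The bisection idea above can be sketched as follows. Here works() is a hypothetical stand-in for “install this release and run a small CUDA op on the card”, and the release list is purely illustrative:

```python
# Hedged sketch of bisecting releases: assumes support for the card was
# dropped at a single point, so the list reads "works ... works, fails ... fails".
def newest_working(releases, works):
    """releases ordered oldest -> newest; returns the newest one that works."""
    lo, hi, best = 0, len(releases) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        if works(releases[mid]):
            best, lo = releases[mid], mid + 1   # works: try something newer
        else:
            hi = mid - 1                        # too new: look older
    return best

releases = ["1.0", "1.1", "1.2", "1.3", "1.4", "1.5"]  # illustrative list
# Pretend support for the card was dropped after 1.2:
print(newest_working(releases, lambda r: r in ("1.0", "1.1", "1.2")))  # -> 1.2
```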

Hi! I have had exactly the same problem, with the following hardware/software properties:
Version: 0.10.251 64 bit http://cuda-z.sf.net/
OS Version: Windows x86 6.2.9200
Driver Version: 425.91
Driver Dll Version: 10.10 (25.21.14.2591)
Runtime Dll Version: 6.50

Core Information

Name: Quadro K2100M
Compute Capability: 3.0
Clock Rate: 666.5 MHz
PCI Location: 0:1:0
Multiprocessors: 3 (576 Cores)
Threads Per Multiproc.: 2048
Warp Size: 32
Regs Per Block: 65536
Threads Per Block: 1024
Threads Dimensions: 1024 x 1024 x 64
Grid Dimensions: 2147483647 x 65535 x 65535
Watchdog Enabled: Yes
Integrated GPU: No
Concurrent Kernels: Yes
Compute Mode: Default
Stream Priorities: No


(as reported by cuda-z).

What I did was the following: I uninstalled all previous attempts to install CUDA for PyTorch, and simply copied your ‘pip install’ command into the Windows shell. After the installation, I again ran

>>> import torch
>>> torch.cuda.is_available()

and obtained True!

What, then, is the difference between your case and mine?

My joy was premature. At the very first attempt to really use CUDA (which is ‘available’, according to PyTorch output), I am getting:

UserWarning: 
Found GPU0 Quadro K2100M which is of cuda capability 3.0.
PyTorch no longer supports this GPU because it is too old.
The minimum cuda capability that we support is 3.5.

and:

RuntimeError: CUDA error: no kernel image is available for execution on the device.

Is there any possibility, after all, of using the NVIDIA Quadro K2100M as a CUDA accelerator for PyTorch?

Thanks for your reply. What I am trying now is going back to whatever was state of the art in 2015, so I am trying Python 3.5 and PyTorch 1.0. I will post here how it goes.