AssertionError: Torch not compiled with CUDA enabled, torch.cuda.is_available()= False

Hello everyone, I hope you are all doing well.
I am writing this topic after trying all the possible solutions for my issue.

I am trying to run my deep-learning model (built with PyTorch) in a Jupyter notebook, but I keep hitting this error: AssertionError: Torch not compiled with CUDA enabled

I have installed CUDA toolkit 10.2 and cuDNN v8.7.0.

Additional info:

(gputorch) C:\Users\dell>python -m torch.utils.collect_env
Collecting environment information...
PyTorch version: 1.13.1
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A

OS: Microsoft Windows 10 Pro
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A

Python version: 3.9.13 (main, Aug 25 2022, 23:51:50) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19044-SP0
Is CUDA available: False
CUDA runtime version: 10.2.89
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: GeForce MX130
Nvidia driver version: 441.22
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2\bin\cudnn_ops_train64_8.dll
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.5
[pip3] numpydoc==1.4.0
[pip3] torch==1.13.1
[pip3] torchaudio==0.13.1+cu116
[pip3] torchvision==0.14.1
[conda] blas                      1.0                         mkl
[conda] cudatoolkit               10.2.89              h74a9793_1
[conda] mkl                       2021.4.0           haa95532_640
[conda] mkl-service               2.4.0           py310h2bbff1b_0
[conda] mkl_fft                   1.3.1           py310ha0764ea_0
[conda] mkl_random                1.2.2           py310h4ed8f06_0
[conda] numpy                     1.21.5          py310h6d2d95c_3
[conda] numpy-base                1.21.5          py310h206c741_3
[conda] numpydoc                  1.5.0           py310haa95532_0
[conda] pytorch                   1.12.1             py3.10_cpu_0    pytorch
[conda] pytorch-mutex             1.0                         cpu    pytorch
[conda] torchaudio                0.12.1                py310_cpu    pytorch
[conda] torchvision               0.13.1                py310_cpu    pytorch
(gputorch) C:\Users\dell>nvidia-smi
Sun Jan 22 18:35:22 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 441.22       Driver Version: 441.22       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce MX130      WDDM  | 00000000:01:00.0 Off |                  N/A |
| N/A   42C    P8    N/A /  N/A |     40MiB /  4096MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

Note that your local CUDA toolkit (and cuDNN) won’t be used unless you build PyTorch from source or build a custom CUDA extension, because the prebuilt binaries ship with their own CUDA runtime, cuDNN, NCCL, etc.
Also, CUDA 10.2 is officially no longer supported (and source builds might start breaking soon, or might already be broken).

Install the current stable or nightly PyTorch pip wheel or conda binary using these install commands, and select the CUDA version you want.
Then update your driver, since 441.22 is too old for CUDA 11; you would need >= 450.80.02 for minor-version compatibility, as described here.
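Once a CUDA-enabled build is installed, a quick sanity check from Python shows whether the binary ships with a CUDA runtime and can actually see the GPU (this is a generic check, not specific to any one wheel):

```python
import torch

# CUDA-enabled pip wheels usually carry a version suffix such as "+cu117";
# CPU-only builds report torch.version.cuda as None.
print(torch.__version__)
print(torch.version.cuda)
print(torch.cuda.is_available())

if torch.cuda.is_available():
    # Index 0 is the first visible GPU.
    print(torch.cuda.get_device_name(0))
```

If `torch.version.cuda` prints `None`, the installed binary is a CPU-only build and no driver update will make `torch.cuda.is_available()` return `True`.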

Thank you @ptrblck for your reply.

Could you kindly explain more about what you mean by "Note that your local CUDA toolkit (and cuDNN) won’t be used unless you build PyTorch from source or build a custom CUDA extension, because the prebuilt binaries ship with their own CUDA runtime, cuDNN, NCCL, etc."?

Sorry, but this is my first time working with PyTorch, so I am not familiar with it.

You would only need to install a new NVIDIA driver, without the full CUDA toolkit, which ships with e.g. the CUDA compiler (nvcc) and the CUDA math libraries (e.g. cuBLAS, cuSOLVER, etc.).
The pip wheels and conda binaries we are building already ship directly with these required libraries, or install them as a dependency in your environment.

If you want to use a specific CUDA compiler or cuBLAS version, etc., you would need to build PyTorch from source locally.
Custom CUDA extensions, as described here, will also use the locally installed CUDA toolkit.
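As a rough sketch of why custom extensions need the local toolkit: they are typically built via torch.utils.cpp_extension, and it is this build step that invokes your locally installed nvcc. The module and source-file names below are purely hypothetical:

```python
# Hypothetical setup.py for a custom CUDA extension.
# Running "python setup.py install" compiles my_kernel.cu with the
# locally installed nvcc, unlike the prebuilt PyTorch wheels, which
# bundle their own CUDA runtime libraries.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    name="my_cuda_ext",  # hypothetical package name
    ext_modules=[
        CUDAExtension(
            name="my_cuda_ext",
            sources=["my_kernel.cpp", "my_kernel.cu"],  # hypothetical files
        )
    ],
    cmdclass={"build_ext": BuildExtension},
)
```

This is a build configuration fragment, so it only does something useful once the (hypothetical) C++/CUDA sources exist alongside it.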

In the current nightly releases, torch.compile would also depend on your local ptxas binary to JIT-compile the OpenAI/Triton kernels, if I’m not mistaken.
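If that is the case, a quick way to see what would be picked up is to check whether a ptxas binary is visible on your PATH (assuming Triton resolves it via PATH; other lookup locations may also apply):

```python
import shutil

# shutil.which returns the full path of the first matching executable
# found on PATH, or None if ptxas is not installed / not on PATH.
ptxas_path = shutil.which("ptxas")
print("ptxas:", ptxas_path or "not found on PATH")
```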