Torch not compiled with CUDA enabled

Thank you, Dwight, for trying to help.

print(torch.cuda.is_available())

outputs False for me.
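For reference, a quick way to see which build is actually installed (a minimal sketch; on a CPU-only build, torch.version.cuda comes back as None) is:

import torch

# Report the installed PyTorch build and whether it was compiled with CUDA.
# A CPU-only build typically shows "+cpu" in the version string and None for torch.version.cuda.
print(torch.__version__)          # e.g. "1.7.1+cpu" vs. "1.7.1"
print(torch.version.cuda)         # None means the binary has no CUDA support
print(torch.cuda.is_available())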

But running nvidia-smi from the Anaconda prompt shows I have CUDA Version 11.2:

(deeplearning) C:\WINDOWS\system32>nvidia-smi
Sun Feb 21 10:06:38 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.89       Driver Version: 460.89       CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce RTX 2060   WDDM  | 00000000:01:00.0 Off |                  N/A |
| N/A   40C    P8     9W /  N/A |    164MiB /  6144MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

And when I run

conda install pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch

it tells me that all requested packages are already installed.
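In case it helps with the diagnosis, one extra check (not something from the conda output above) is torch.backends.cuda.is_built(), which distinguishes a CPU-only package from a CUDA build that simply cannot find the driver:

import torch

# Check whether the installed binary was compiled with CUDA support at all.
# False -> a CPU-only package was installed.
# True while is_available() is False -> more likely a driver/runtime problem.
print(torch.backends.cuda.is_built())
print(torch.cuda.is_available())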

I tried uninstalling CUDA and PyTorch and installing them again, but nothing changed.
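If a reinstall ever does pick up a CUDA-enabled build, a guarded sanity check like the following (a hypothetical snippet, not output from my machine) should then list the RTX 2060:

import torch

# Only query the GPU when CUDA is actually usable, so this also runs safely on a CPU-only build.
if torch.cuda.is_available():
    print(torch.cuda.device_count())      # number of visible GPUs
    print(torch.cuda.get_device_name(0))  # should report the GeForce RTX 2060
else:
    print("CUDA is still not available")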