NVIDIA GeForce RTX 3080 Ti with CUDA capability sm_86 is not compatible with the current PyTorch installation

Hey PyTorch Community,

I'm currently working on a project to train an agent.
Unfortunately I can't run my script due to this error:

NVIDIA GeForce RTX 3080 Ti with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
If you want to use the NVIDIA GeForce RTX 3080 Ti GPU with PyTorch, please check the instructions at Start Locally | PyTorch
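For context, my understanding is that the error means the installed PyTorch binary was compiled only for the listed compute capabilities, while the RTX 3080 Ti is an Ampere GPU and needs sm_86, which only the CUDA 11.x builds include. A minimal sketch of that kind of check (the arch lists below are illustrative, taken from the error output, not PyTorch's actual internal tables):

```python
# Sketch of the compatibility check behind the error message.
# The "old_build" list mirrors the one printed in the error;
# the RTX 3080 Ti (Ampere) has compute capability sm_86.

def is_supported(gpu_arch, binary_archs):
    """Return True if the binary was compiled for the GPU's architecture."""
    return gpu_arch in binary_archs

# Arch list reported by the failing install (a CUDA 10.2-era build):
old_build = ["sm_37", "sm_50", "sm_60", "sm_70"]
# Arch list of a typical CUDA 11.x build, which adds Ampere (illustrative):
cu11_build = ["sm_37", "sm_50", "sm_60", "sm_70", "sm_75", "sm_80", "sm_86"]

print(is_supported("sm_86", old_build))   # RTX 3080 Ti vs. old build -> False
print(is_supported("sm_86", cu11_build))  # RTX 3080 Ti vs. CUDA 11.x build -> True
```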

I'm using Ubuntu 20.04.

Information:

RUN nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
RUN python -c "import torch; print(torch.__version__)"
1.9.0+cu111
RUN nvidia-smi
Mon May 15 20:02:00 2023       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.182.03   Driver Version: 470.182.03   CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:2D:00.0  On |                  N/A |
|  0%   38C    P8    26W / 350W |    602MiB / 12045MiB |      8%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1125      G   /usr/lib/xorg/Xorg                175MiB |
|    0   N/A  N/A      1416      G   /usr/bin/gnome-shell               33MiB |
|    0   N/A  N/A     22053      G   ...464272974724260903,262144      124MiB |
|    0   N/A  N/A     23840      G   .../ros/noetic/lib/rviz/rviz       16MiB |
|    0   N/A  N/A     24776      C   python                            247MiB |
+-----------------------------------------------------------------------------+

Thank you very much! :slight_smile:

You could update PyTorch to the latest stable or nightly release, as these builds support your GPU.
Also, based on the error message it seems you are running a PyTorch binary built with CUDA 10.2, which doesn't match your debug print output (`1.9.0+cu111`).
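For reference, one way to clean up and reinstall is sketched below. The exact install command depends on your setup, so check the selector at pytorch.org (Start Locally) for the current one; the `cu113` index here is an assumption, standing in for whichever CUDA 11.x wheel you pick:

```shell
# Remove any stale PyTorch installs first (repeat in case several copies
# are present, e.g. one in the user site-packages and one system-wide):
pip uninstall -y torch
pip uninstall -y torch
# Then install a build with CUDA 11.x support (includes sm_86 for Ampere);
# this assumes the CUDA 11.3 wheel index -- verify on pytorch.org:
pip install torch --extra-index-url https://download.pytorch.org/whl/cu113
```

Afterwards, `python -c "import torch; print(torch.version.cuda)"` should report the CUDA version the binary was built with, rather than the toolkit version from `nvcc`.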