NVIDIA GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation

Hello, I’m getting the following error:
NVIDIA GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
If you want to use the NVIDIA GeForce RTX 3090 GPU with PyTorch, please check the instructions at Start Locally | PyTorch

I’ve tried to install a compatible version, but I don’t think I’m doing it correctly. Can anyone help me get it installed correctly?

$ python3 -m pip list | grep torch
torch                   1.10.1              
torchvision             0.11.2
$ which pip3
/usr/bin/pip3

CUDA Version : 11.4


You’ve most likely installed the binaries with the CUDA10.2 runtime, which is incompatible with your 3090. Install the pip wheels or conda binaries with CUDA11 and it should work.
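The mismatch can be checked programmatically. Below is a minimal sketch of the test PyTorch performs; `wheel_supports_device` is a hypothetical helper, not a torch API. In a live session the inputs would come from `torch.cuda.get_arch_list()` and `torch.cuda.get_device_capability()`:

```python
def wheel_supports_device(arch_list, capability):
    """Check whether a wheel's compiled arch list covers a GPU's
    compute capability, e.g. (8, 6) -> 'sm_86' for an RTX 3090."""
    major, minor = capability
    return f"sm_{major}{minor}" in arch_list

# Arch list reported by the CUDA 10.2 wheel in the error message above:
old_archs = ["sm_37", "sm_50", "sm_60", "sm_70"]
print(wheel_supports_device(old_archs, (8, 6)))  # False: sm_86 is missing
```

A CUDA 11 wheel includes `sm_80`/`sm_86` in its arch list, so the same check passes on Ampere GPUs.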


Can you clarify why you tagged torchx? It doesn’t seem like you are using it, but I want to make sure that is the case.

Yes, you are right. My mistake; I’ve fixed it. It works now. Thanks for replying.

I am facing the exact same error. Following your advice, I did the following:

  • To install CUDA 11 I used the following command:
    sudo sh cuda_11.4.0_470.42.01_linux.run --toolkit --silent --override

And I can confirm I see the following when I run the nvcc -V Linux command:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Wed_Jun__2_19:15:15_PDT_2021
Cuda compilation tools, release 11.4, V11.4.48
Build cuda_11.4.r11.4/compiler.30033411_0

However, I still get the above error! I am wondering if you might have any advice/comment?
Thanks

https://download.pytorch.org/whl/cu114/torch_stable.html doesn’t seem to be a valid URL so I guess you might have installed the (default) CUDA10.2 wheels?
What did the install log show and what does torch.version.cuda as well as torch.cuda.get_arch_list() return?

Many thanks @ptrblck for quick response and the smart suggestions. You are correct! Here is the output of the two commands

>>> torch.cuda.get_arch_list()
['sm_37', 'sm_50', 'sm_60', 'sm_70']
>>> torch.version.cuda
'10.2'

Apparently the CUDA version used by torch is still 10.2.
I uninstalled torch and reinstalled it with pip3 using this valid link instead: https://download.pytorch.org/whl/cu113/torch_stable.html

However, I still see that torch.version.cuda returns '10.2', while both nvidia-smi and nvcc -V report CUDA 11! Could it be because I have different versions of the CUDA toolkit installed? (BTW, I have the path to the latest one in my PATH and LD_LIBRARY_PATH.)

No, since the pip wheels and conda binaries ship with their own CUDA runtime. Your local CUDA toolkit will be used if you are building PyTorch from source or a custom CUDA extension.
Run the uninstall command a few times until pip and conda don’t find any PyTorch installations anymore and verify it via pip list | grep torch and conda list | grep torch. Then reinstall the correct wheels again. Alternatively, create a new virtual environment and install the CUDA11 wheels there.
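The "uninstall until clean" step can be sketched as a small script. This is a rough, hypothetical helper, assuming `pip` is on `PATH`; a conda environment would need the analogous `conda list` / `conda remove` calls:

```python
import subprocess

def torch_packages(pip_list_output: str):
    """Return package names containing 'torch' from `pip list` output."""
    names = []
    for line in pip_list_output.splitlines():
        parts = line.split()
        if parts and "torch" in parts[0].lower():
            names.append(parts[0])
    return names

def uninstall_all_torch():
    # Repeat until `pip list` shows no torch packages, since stacked
    # installs can leave an older copy behind after one uninstall pass.
    while True:
        out = subprocess.run(["pip", "list"], capture_output=True,
                             text=True, check=True).stdout
        pkgs = torch_packages(out)
        if not pkgs:
            break
        subprocess.run(["pip", "uninstall", "-y", *pkgs], check=True)
```

After `uninstall_all_torch()` reports nothing left, reinstalling the cu11x wheel into the clean environment avoids the stale-copy problem entirely.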


@ptrblck, many thanks for your help.
Confirming that the problem was resolved by:
1. creating a new conda env
2. installing the PyTorch 1.9.0 wheel with CUDA 11.1

Hello,
I trained with PyTorch before, but today I tried to train on an RTX 3090 with CUDA 11.7.
When I import torch and run torch.cuda.get_device_properties('cuda'), I get this error:
NVIDIA GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.

I downgraded CUDA to 11.6, but it still doesn't work.
Using Anaconda it now works, but at the end of training I get the error "AttributeError: 'NoneType' object has no attribute '_free_weak_ref'".
Has anyone found a solution to this problem?


I guess you built PyTorch from source using a local CUDA toolkit? If so, then you have set the wrong GPU architectures for this build as sm_86 is missing.
The pip wheels and conda binaries with the 11.7 CUDA runtime are still work in progress and I’m currently adding the manywheel builds.
In case you tried to use the CUDA 11.7 binary URL, note that it would install the default wheel with CUDA 10.2, which would explain the error.

Dear ptrblck,
Normally I install CUDA from CUDA Toolkit 11.7 Downloads | NVIDIA Developer (and also use the archive for old versions). I also use pip because I have problems with conda and the ROS system, so I set up a virtualenv and source it to install packages.
Until now I have never configured the GPU architecture and it worked; maybe it is set automatically.
Normally I use CUDA 11.6 and it works with NVIDIA driver 510.47.3, but on this system I can't install driver 510.47.3; its version is 510.73.5, and I don't know whether CUDA 11.6 works with that version or not.
In the end I want to find the mistake that is causing these errors.
Thanks

If I understand you correctly, you are not building from source but are installing the pip wheels.
Note that in this case the local CUDA toolkit won’t be used unless you are building a custom CUDA extension (or PyTorch from source of course).
Given the error message you have most likely installed PyTorch with the CUDA 10.2 runtime, which is too old for your Ampere GPU.
Use the CUDA 11 runtime via:

pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113

and it should work (make sure to uninstall the old binaries before installing the new one).


I install torch and torchvision from the YOLOv5 requirements file ("pip install -r requirements.txt"), which pins:
torch>=1.7.0
torchvision>=0.8.1

If I understand you correctly, you're telling me to remove all of CUDA, torch, and torchvision,
then install CUDA 11.6 from NVIDIA Developer,
and at the end install torch and torchvision from https://download.pytorch.org/whl/cu113 as you said?

No, keep your local CUDA toolkit (11.7) and just uninstall the PyTorch pip wheels.
Then install the PyTorch pip wheel with the CUDA 11.3 runtime (your local CUDA 11.7 toolkit won’t be used to run the pip wheels and can stay).
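The distinction can be made concrete: `torch.version.cuda` reports the runtime bundled into the wheel, while `nvcc -V` reports the local toolkit, and the two may legitimately differ. Here is a small sketch that pulls the toolkit release out of the `nvcc` banner (`toolkit_release` is a hypothetical helper, shown with canned output rather than a live `nvcc` call):

```python
import re

def toolkit_release(nvcc_output: str):
    """Extract the toolkit release (e.g. '11.4') from `nvcc -V` output."""
    match = re.search(r"release (\d+\.\d+)", nvcc_output)
    return match.group(1) if match else None

banner = (
    "nvcc: NVIDIA (R) Cuda compiler driver\n"
    "Cuda compilation tools, release 11.4, V11.4.48"
)
print(toolkit_release(banner))  # 11.4
```

Seeing, say, 11.7 here while `torch.version.cuda` says '11.3' is perfectly fine: the wheel runs on its own bundled runtime and only needs a sufficiently new driver.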

I can't understand "the pip wheels" — do you mean pytorch-wheel-installer · PyPI?
The Python env uses the local CUDA and the installation is saved into the lib folder.
As I understand it, you're telling me I can install PyTorch with another CUDA runtime and it will work.
I also have another problem with training: after using conda, fixing the problem, and trying again, I got this error:
AttributeError: 'NoneType' object has no attribute '_free_weak_ref' after fusing layers · Issue #7339 · ultralytics/yolov5 · GitHub
Do you have any solution for this?

No, I was referring to the pip install torch ... command.

I assume you’ve solved your initial issue using the conda binaries? If so, great!

No, I don’t know what might be causing this issue in the Yolo repo.

Aha, thanks.
So I uninstall torch using pip uninstall torch -y,
then install torch with:

pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113

If you are still hitting the:

NVIDIA GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.

error, then yes.
If PyTorch itself works fine now with your new conda binary then I would just leave it as it is.

Hi! I faced a similar problem but could not use conda to solve it. I managed to set it up without conda. Here are the steps:

  1. Update your CUDA Toolkit. All steps to do that are listed here. You just need to select your architecture, distribution, and its version, then download and install. Before you do that, it is usually a good idea to go to the PyTorch website and check what the latest supported CUDA version is. Sometimes you may overshoot and install a version that is too new, so it is worth checking.

  2. Execute the required post-install actions. In my case the only thing I needed to do was add export PATH=/usr/local/cuda-11.6/bin${PATH:+:${PATH}} to my .bashrc file.

  3. Confirm that your CUDA is installed correctly. Run nvcc --version. Do not rely on the CUDA version displayed by nvidia-smi. Sometimes those two commands show different values, and nvcc --version is what we care about.

  4. Uninstall torch and torchvision from your Python environment: pip uninstall torch torchvision.

  5. Once again, go to the PyTorch website and select a configuration that works for you. You will get a pip install command that is ready to use. In my case it was pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116

  6. Test whether torch installed correctly. Run python3 in a terminal, and then:

>>> import torch
>>> torch.version.cuda
'11.6'
>>> torch.cuda.get_arch_list()
['sm_37', 'sm_50', 'sm_60', 'sm_70', 'sm_75', 'sm_80', 'sm_86']

Good luck!