GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation

I installed PyTorch with

pip3 install --pre torch torchvision torchaudio -f

And then in a Python session I ran:

import torch

which then raised the warning in the title.

I know this supposedly works for people with CUDA 11.1 (while I’m on 11.0) but before I ruin my life by trying to upgrade my CUDA I thought I’d check here to make sure I’m not missing anything else.

Just as a sanity check, what does torch.version.cuda show?


@eqy thanks for jumping in. Well, it shows 10.2. I understand the significance of this, but I don’t know why that is or what to do about it.

On another note, to provide more context: I’ve been working with PyTorch on this machine for a while now with no problems using pip3 install. I just want to work with some of the latest FX tracing stuff, which is why I want to upgrade to 1.8.

If nvidia-smi shows CUDA 11.0, then the first thing I try to do when I see this (if there aren’t other users on the system or dependencies) is to just pip3 uninstall (or whatever other package managers might have installed PyTorch) until import torch no longer works in a Python interpreter. Then I try the install again and see if it works.
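The uninstall-until-it-fails loop above can be checked without actually importing torch; a minimal sketch (importlib.util.find_spec just locates the package without loading it):

```python
import importlib.util

# After each `pip3 uninstall torch` (or conda remove), rerun this check;
# keep uninstalling until no torch installation is found in the environment.
spec = importlib.util.find_spec("torch")
if spec is None:
    print("torch fully removed; safe to reinstall")
else:
    print("torch still importable from:", spec.origin)
```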

Unfortunately this kind of thing still appears in the year 2021 (e.g.,
[NEED HELP] Trouble with CUDA capability sm_86 - PyTorch Forums)

Your local CUDA toolkit won’t be used unless you are building PyTorch from source or a custom CUDA extension, since the binaries ship with the CUDA runtime, which is specified by the install command.

I see that you were using the cu110 path, so switch to:

pip3 install --pre torch torchvision torchaudio -f

Also, since your current installation shows CUDA 10.2, make sure it’s removed first, as explained by @eqy.


I’m speechless…

I did not know that about the binaries shipping with CUDA. I just assumed I had to get the one that matches my toolkit.

Problem solved. Thank you!

I am facing the same problem. I also don’t know about binaries: what is meant by “building PyTorch from source”, and what other way is there to install PyTorch that would work for me and solve this problem?

Your 3090 will work if you select CUDA 11.1 here and install the pip wheels or conda binaries using the provided commands.

@ptrblck I have the following error with CUDA 11.4:

NVIDIA GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37.
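The warning quoted above boils down to a membership check: the prebuilt binary ships kernels for a fixed list of compute capabilities, and sm_86 (the RTX 3090's Ampere architecture) is not in that list for this build. A hypothetical sketch (not PyTorch's actual code):

```python
# Capabilities this particular prebuilt binary was compiled for,
# taken from the warning message above.
SUPPORTED = {"sm_37", "sm_50", "sm_60", "sm_61", "sm_70", "sm_75"}

def binary_supports(arch: str) -> bool:
    """True if the installed binary ships kernels for this GPU architecture."""
    return arch in SUPPORTED

print(binary_supports("sm_86"))  # RTX 3090 -> False, hence the warning
print(binary_supports("sm_75"))  # e.g. a Turing GPU -> True
```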

I installed PyTorch using this command:

conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia

I would be grateful if you could point me in the right direction. Should I install the nightly version or use Docker?

You most likely have another PyTorch installation with CUDA 10.2 in your current environment, which conflicts with the new one.
Try to either uninstall all source builds, pip wheels, and conda binaries in the current environment, or create a new virtual environment and reinstall PyTorch there.

Thanks, @ptrblck. Should I install with this box for CUDA 11.4? conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia

I’m not sure where a “box for CUDA 11.4” shows up, but I guess you mean the selection in the installation guide?
If so, CUDA 11.4 isn’t available and you should use the CUDA 11.1 selector.

Thanks, @ptrblck. Yes, I mean the selection in the installation guide.

I have installed torch and cudatoolkit 11 via conda from the link above, and when I run torch.version.cuda I get 10.2 instead of 11.3…

Double post from here with a follow-up.

Thank you for your great support in this forum. However, I am still confused every time: what do you mean by PyTorch from source? What do you mean by pip wheels or conda binaries?

All I know is that we use a pip or conda command to install any package. Thank you.

“From source” means you are building PyTorch locally on your workstation from its source code by compiling the code.
To do so you would git clone the PyTorch source code, install the compiler toolchain(s), and build it locally on your workstation.

“pip wheels” are the pip binaries installed via pip install torch ....

“conda binaries” are the conda binaries installed via conda install pytorch ....
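As a hedged summary, the three install paths could look like this (illustrative commands only; exact package names, channels, and flags vary by release):

```shell
# 1) pip wheels: prebuilt binaries installed via pip
pip3 install torch torchvision torchaudio

# 2) conda binaries: prebuilt packages from the pytorch channel
conda install pytorch torchvision torchaudio -c pytorch

# 3) from source: clone the code and compile it locally
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
python setup.py install
```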

I hope that clears things up.