GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation

I installed PyTorch with

pip3 install --pre torch torchvision torchaudio -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html

And then in a Python session I ran:

import torch
torch.tensor(1).cuda()

which then raised the warning in the title.

I know this supposedly works for people with CUDA 11.1 (while I’m on 11.0), but before I ruin my life by trying to upgrade my CUDA, I thought I’d check here to make sure I’m not missing anything else.

Just as a sanity check, what does torch.version.cuda show?

@eqy thanks for jumping in. Well, it shows 10.2. I understand the significance of this, but I don’t know why that is or what to do about it.
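
For reference, this is roughly how I’m checking things (a quick sketch; the printed values are just what I see on my machine):

import torch

print(torch.version.cuda)                    # shows 10.2 here, i.e. the CUDA runtime the installed binary was built with
print(torch.cuda.get_device_capability(0))   # (8, 6) for an RTX 3090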

On another note, to provide more context: I’ve been working with PyTorch on my computer for a while now with no problems using pip3 install https://download.pytorch.org/whl/cu110/torch-1.7.1%2Bcu110-cp38-cp38-linux_x86_64.whl. I just want to work with some of the latest FX tracing stuff, which is why I want to upgrade to 1.8.

If nvidia-smi shows CUDA 11.0, then the first thing I try when I see this (assuming there aren’t other users on the system or other dependencies) is to run pip3 uninstall torch (or the equivalent for whatever other package manager installed PyTorch) until import torch no longer works in a Python interpreter. Then I try the install again and see if it works.
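
As a rough sketch of that check, after each uninstall pass you can run this in a fresh interpreter:

# Once every PyTorch installation has been removed, this should raise ModuleNotFoundError.
try:
    import torch
    print("still installed:", torch.__version__, "from", torch.__file__)
except ModuleNotFoundError:
    print("torch is gone - safe to reinstall")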

Unfortunately this kind of thing still appears in 2021 (e.g., [NEED HELP] Trouble with CUDA capability sm_86 - PyTorch Forums).

Your local CUDA toolkit won’t be used unless you are building PyTorch from source or a custom CUDA extension, since the binaries ship with the CUDA runtime, which is specified by the install command.

I see that you were using the cu110 path, so switch to:

pip3 install --pre torch torchvision torchaudio -f https://download.pytorch.org/whl/nightly/cu111/torch_nightly.html

Also, since your current installation shows CUDA 10.2, make sure it’s removed first, as explained by @eqy.
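
Once the cu111 nightly is installed, a quick sanity check could look like this (a sketch; exact outputs depend on the build):

import torch

print(torch.version.cuda)           # should now report 11.1, the runtime shipped in the wheel
print(torch.cuda.get_arch_list())   # should include 'sm_86' for the RTX 3090
print(torch.tensor(1).cuda())       # should run without the sm_86 warning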

I’m speechless…

I did not know that about the binaries shipping with CUDA. I just assumed I had to get the one that matches my toolkit.

Problem solved. Thank you!

I am facing the same problem. I also don’t know about the binaries, and what is meant by “building PyTorch from source”? Is there another way to install PyTorch that would work for me and solve this problem?

Your 3090 will work if you select CUDA 11.1 here and install the pip wheels or conda binaries using the provided commands.

@ptrblck I have the following error with CUDA 11.4:

NVIDIA GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37.

I installed PyTorch using this command:

conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia

I would be grateful if you could point me in the right direction. Should I install the nightly version or use Docker?

You most likely have another PyTorch installation with CUDA 10.2 in your current environment, which conflicts with the new one.
Try to either uninstall all source builds, pip wheels, and conda binaries in the current environment, or create a new virtual environment and reinstall PyTorch there.
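
A quick way to see which installation the current environment actually picks up (a minimal sketch):

import torch

# The file path shows which environment the conflicting build lives in,
# and version.cuda shows which CUDA runtime it was built against.
print(torch.__file__)
print(torch.__version__, torch.version.cuda)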

Thanks, @ptrblck. Should I install with this box for CUDA 11.4?

conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia

I’m not sure where a “box for CUDA 11.4” shows up, but I guess you mean the selection in the installation guide?
If so, CUDA 11.4 isn’t available and you should use the CUDA 11.1 selector.

Thanks, @ptrblck. Yes, I mean the selection in the installation guide.

I have installed torch and cudatoolkit 11 via conda from the link above, and when I run torch.version.cuda I get 10.2 instead of 11.3…

Double post from here with a follow-up.

Thank you for your great support in this forum. However, I am still confused: what do you mean by building PyTorch from source? And what do you mean by pip wheels or conda binaries?

All I know is that we use a pip or conda command to install any package. Thank you.

“From source” means you are building PyTorch locally on your workstation by compiling its source code.
To do so you would git clone the PyTorch repository, install the compiler toolchain(s), and build it on your workstation.

“pip wheels” are the pip binaries installed via pip install torch ....

“conda binaries” are the conda binaries installed via conda install pytorch ....

I hope that clears things up.
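
If it helps, you can also ask the installed package how it was built (a sketch; the exact output differs between builds):

import torch

print(torch.__version__)         # pip wheels from pytorch.org usually carry a suffix such as +cu111
print(torch.version.cuda)        # CUDA runtime the binary was built against, or None for CPU-only builds
print(torch.__config__.show())   # full build configuration of the installed binary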

Excuse me, @ptrblck! I am facing the same problem:

NVIDIA GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37.

with CUDA version 12.1 (reported by nvidia-smi) and a local CUDA toolkit version of 10.1 (reported by nvcc -V). Which version of PyTorch should I install? I searched pytorch.org for CUDA 12.1, but there is no matching PyTorch version :sob:

Based on the error message, you have installed an old PyTorch version that shipped with CUDA 10.2; you should see it via print(torch.version.cuda).
Your locally installed CUDA toolkit won’t be used unless you build PyTorch from source or a custom CUDA extension. The current release binaries ship with CUDA 11.7 and 11.8, and you can install either one for your device.
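
After installing one of the current binaries, you can confirm the 3090 is covered (a minimal sketch):

import torch

print(torch.version.cuda)                       # e.g. 11.8, depending on the binary you picked
print(torch.cuda.is_available())                # True if the driver and the binary work together
print(torch.cuda.get_device_capability(0))      # (8, 6) for an RTX 3090
print('sm_86' in torch.cuda.get_arch_list())    # True once a compatible binary is installed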

omg, thanks a lot for your advice! Do you mean that the locally installed CUDA 10.1 toolkit is not a concern as long as I’m in a newly created conda env, and that the CUDA driver version 12.1 is compatible with a newer PyTorch build (shipped with CUDA 11.7 or 11.8)?