I know this supposedly works for people with CUDA 11.1 (while I’m on 11.0) but before I ruin my life by trying to upgrade my CUDA I thought I’d check here to make sure I’m not missing anything else.
@eqy thanks for jumping in. Well it shows 10.2. I understand the significance of this, but I don’t know why it is so or what to do about it.
On another note, to provide more context: I’ve been working with PyTorch on this computer for a while now with no problems using pip3 install https://download.pytorch.org/whl/cu110/torch-1.7.1%2Bcu110-cp38-cp38-linux_x86_64.whl. I just want to work with some of the latest FX tracing features, which is why I want to upgrade to 1.8.
If nvidia-smi shows CUDA 11.0, the first thing I try when I see this (assuming there aren’t other users or dependencies on the system) is to run pip3 uninstall (or whatever other package manager installed PyTorch) repeatedly until import torch no longer works in a Python interpreter. Then I run the install again and see if it works.
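A minimal sketch of the verification step described above: after the uninstalls, check that no stale copy of torch is still importable, and if one is, print where it lives so you know which installation is shadowing the new wheel.

```python
# Check whether any copy of torch remains importable in this interpreter.
# If a path is printed, that leftover installation is the one to remove.
import importlib.util

spec = importlib.util.find_spec("torch")
if spec is None:
    print("clean: no torch installation found")
else:
    print("torch still importable from:", spec.origin)
```

Running this in the same interpreter you use for your projects matters: a different Python (e.g. system vs. conda) can have its own, separate torch installation.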
Your local CUDA toolkit won’t be used unless you are building PyTorch from source or a custom CUDA extension, since the binaries ship with the CUDA runtime, which is specified by the install command.
I see that you were using the cu110 path, so switch to:
I am facing the same problem. I also don’t know about binaries: what is meant by “building PyTorch from source”, and what other way is there to install PyTorch that would work for me and solve this problem?
@ptrblck I have the following error with CUDA 11.4:
NVIDIA GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37.
You most likely have another PyTorch installation with CUDA 10.2 in your current environment, which conflicts with the new one.
Try to either uninstall all source builds, pip wheels, and conda binaries in the current environment or create a new virtual environment and reinstall PyTorch again.
I’m not sure where a “box for cuda11.4” shows up, but I guess you mean the selector in the installation guide?
If so, CUDA 11.4 isn’t available and you should use the CUDA 11.1 selector.
Thank you for your great support in this forum. However, I am still confused: what do you mean by PyTorch from source? What do you mean by pip wheels or conda binaries?
What I know is that we use a pip command or a conda command to install any package. Thank you.
“From source” means you are building PyTorch locally on your workstation by compiling its source code.
To do so you would git clone the PyTorch source code, install the compiler toolchain(s), and build it locally on your workstation.
“pip wheels” are the pip binaries installed via pip install torch ....
“conda binaries” are the conda binaries installed via conda install pytorch ....
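If it’s unclear which of these routes installed the torch package currently in your environment, the package metadata usually records it. A hedged sketch (assumes Python 3.8+; a source build or some conda installs may lack this metadata, in which case the installer is reported as unknown):

```python
# Inspect the installed torch distribution's metadata to see which tool
# installed it. The INSTALLER file typically contains "pip" or "conda".
from importlib import metadata

try:
    dist = metadata.distribution("torch")
    installer = (dist.read_text("INSTALLER") or "unknown").strip()
    print("torch", dist.version, "installed by:", installer)
except metadata.PackageNotFoundError:
    print("torch is not installed in this environment")
```

This is handy when following the advice above to uninstall conflicting copies: it tells you whether pip uninstall or conda remove is the right tool.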
Excuse me, @ptrblck! I am facing the same problem:
NVIDIA GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37.
with CUDA version 12.1 (reported by nvidia-smi) and a CUDA toolkit version of 10.1 (reported by nvcc -V). Which version of PyTorch should I install? I searched pytorch.org for CUDA 12.1 but there is no matching PyTorch version.
Based on the error message you have installed an old PyTorch version which shipped with CUDA 10.2; you can verify this via print(torch.version.cuda).
Your locally installed CUDA toolkit won’t be used unless you build PyTorch from source or a custom CUDA extension. The current release binaries ship with CUDA 11.7 and 11.8, and you can install either for your device.
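The checks mentioned in this thread can be combined into one guarded diagnostic (a sketch; it requires torch to be installed and degrades cleanly when it isn’t, or when no usable GPU is visible):

```python
# Report the CUDA runtime the installed PyTorch binary ships with, and the
# GPU's compute capability (e.g. sm_86 for an RTX 3090), so the two can be
# compared against the "not compatible" error message above.
try:
    import torch

    print("torch:", torch.__version__)
    print("built for CUDA:", torch.version.cuda)  # None for a CPU-only build
    if torch.cuda.is_available():
        major, minor = torch.cuda.get_device_capability(0)
        print(f"GPU compute capability: sm_{major}{minor}")
    else:
        print("no GPU usable by this torch build")
except ImportError:
    print("torch is not installed in this environment")
```

If “built for CUDA” prints 10.2 here while nvidia-smi shows a newer driver, that confirms an old wheel is still active and should be uninstalled first.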
omg thanks A LOT for your advice! Do you mean that the installed CUDA 10.1 toolkit is of no concern if I’m in a newly created conda env, and that the driver’s CUDA version 12.1 is compatible with a newer PyTorch (shipped with CUDA 11.7 or 11.8)?