That’s expected, since the pip wheels and conda binaries ship with their own CUDA runtime.
Your local CUDA toolkit (whose version `nvcc --version` reports; note that `nvidia-smi` shows the CUDA version supported by the driver, not the runtime PyTorch uses) would only be used if you were building a custom CUDA extension or PyTorch from source.
Thanks for the environment information. Based on it, you are indeed using the pip wheels with the CUDA 10.2 runtime, which are broken on the Turing architecture (see the linked issue).
11.1 refers to the CUDA runtime version. You can install wheels built with it by selecting CUDA 11.1 here:
pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
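After installing, you can check which CUDA runtime the wheel actually ships with via `torch.version.cuda` (it should report `11.1` for the `+cu111` wheels, independently of the toolkit installed locally). A minimal check, wrapped so it degrades gracefully if `torch` is missing:

```python
def cuda_runtime_info():
    """Return the PyTorch version and the CUDA runtime bundled with the
    installed wheel (both None if torch is not installed; cuda_runtime is
    None for CPU-only builds)."""
    try:
        import torch
    except ImportError:
        return {"torch": None, "cuda_runtime": None}
    return {"torch": torch.__version__, "cuda_runtime": torch.version.cuda}

info = cuda_runtime_info()
print(info)  # e.g. {'torch': '1.8.0+cu111', 'cuda_runtime': '11.1'}
```

If `cuda_runtime` still shows `10.2`, the old wheel is likely still picked up from another environment or cache, so uninstall it first and reinstall with the `-f` index above.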