Hi guys, I have the following issue and I hope I’m in the right category.
So I have the following setup:
RTX2080 Ti
Python 3.9.4
NVIDIA-SMI Driver Version 461.33
When I run nvidia-smi, it returns said driver version and CUDA Version 11.2 (which, as I understand, is the most recent CUDA version my GPU supports, right?). When I run nvcc -V, it returns:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Tue_Sep_15_19:12:04_Pacific_Daylight_Time_2020
Cuda compilation tools, release 11.1, V11.1.74
Build cuda_11.1.relgpu_drvr455TC455_06.29069683_0
which seems right, since I installed CUDA Toolkit 11.1 specifically so that it works with PyTorch. I’m installing PyTorch (from the “getting started” page) with:
pip3 install torch==1.8.1+cu111 torchvision==0.9.1+cu111 torchaudio===0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
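As a sanity check that pip really picked up the +cu111 wheels (this doesn’t require import torch to succeed; importlib.metadata is in the standard library from Python 3.8 on), I can run something like:

import importlib.metadata

# Ask the installed package metadata which builds ended up installed,
# without importing torch itself.
print(importlib.metadata.version("torch"))        # I’d expect "1.8.1+cu111"
print(importlib.metadata.version("torchvision"))  # I’d expect "0.9.1+cu111"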
Now when I start CMD or run any IDE/editor and try to import torch, I’m getting the following error:
Error loading "C:\Users\myusername\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\lib\cudnn_cnn_train64_8.dll" or one of its dependencies.
I don’t exactly understand what I’m doing wrong. Is there any chance that it’s not working because there is an “ü” in my username (which I’ve replaced with “myusername” here for privacy)? However, when I install the non-CUDA version, I can import torch just fine.
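Regarding the “ü” theory, the experiment I had in mind (just a sketch, not a confirmed fix) is to check that Python resolves the real path with the umlaut and to register torch’s lib folder for DLL lookup before importing, using os.add_dll_directory (available since Python 3.8, Windows only):

import os

# The real path contains the "ü"; the placeholder from the error message is used here.
lib_dir = r"C:\Users\myusername\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\lib"

print(os.path.exists(lib_dir))  # True would mean the umlaut path resolves fine
os.add_dll_directory(lib_dir)   # tell Windows to also search this folder for dependent DLLs

import torch                    # does the import still fail after this?
print(torch.cuda.is_available())

If the import still fails after that, I’d assume the umlaut isn’t the culprit and some dependency of the cuDNN DLL is missing instead.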