Can't find GPU or CUDA version


  • CUDA 11.7
  • RTX 3060 Ti with driver version 516.59

When running the following code,

import torch
print(torch.cuda.is_available())
print(torch.version.cuda)

it returns False and None. I’ve checked whether I have cpuonly installed, and I do not, so that is not the cause. Running nvidia-smi and nvcc --version I get the following outputs, respectively:
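For completeness, here is a slightly fuller version of the check (the device queries at the end are only reached when CUDA is actually detected):

```python
import torch

# Versions reported by the installed build
print(torch.__version__)          # 1.13.0 per collect_env below
print(torch.version.cuda)         # the CUDA version the build was compiled for
print(torch.cuda.is_available())  # False in my case

# Only meaningful when CUDA is detected
if torch.cuda.is_available():
    print(torch.cuda.device_count())
    print(torch.cuda.get_device_name(0))
```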

Tue Nov 15 14:46:23 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 516.59       Driver Version: 516.59       CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        TCC/WDDM     | Bus-Id       Disp.A  | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
| 0%   46C   P8    21W / 200W   |  1086MiB /  8192MiB  |      3%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Tue_May__3_19:00:59_Pacific_Daylight_Time_2022
Cuda compilation tools, release 11.7, V11.7.64
Build cuda_11.7.r11.7/compiler.31294372_0

I installed PyTorch using the command given on the Start Locally page:

conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia

I’m not sure if this will help with debugging, but running

python -m torch.utils.collect_env

gives the following output:
PyTorch version: 1.13.0
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A

OS: Microsoft Windows 10 Pro
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A

Python version: 3.9.12 (main, Apr 4 2022, 05:22:27) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19044-SP0
Is CUDA available: True
CUDA runtime version: 11.7.64
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060 Ti
Nvidia driver version: 516.59
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

Versions of relevant libraries:
[pip3] numpy==1.23.3
[pip3] torch==1.13.0
[pip3] torchaudio==0.13.0
[pip3] torchvision==0.14.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h59b6b97_2
[conda] mkl 2021.4.0 haa95532_640
[conda] mkl-service 2.4.0 py39h2bbff1b_0
[conda] mkl_fft 1.3.1 py39h277e83a_0
[conda] mkl_random 1.2.2 py39hf11a4ad_0
[conda] numpy 1.23.0 pypi_0 pypi
[conda] numpy-base 1.23.3 py39h4da318b_0
[conda] pytorch 1.13.0 py3.9_cuda11.7_cudnn8_0 pytorch
[conda] pytorch-cuda 11.7 h67b0de4_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 0.13.0 pypi_0 pypi
[conda] torchvision 0.14.0 pypi_0 pypi

From this, it directly shows that CUDA is available and lists my GPU, so I’m very confused about why this isn’t reflected in the program. Any help is appreciated; I’m very lost on what the issue might be.
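One extra check I can run, in case my script is somehow picking up a different environment than collect_env did (this is just a guess at a mismatch, not something I’ve confirmed):

```python
import sys
import torch

# Confirm the failing script runs in the same environment
# that collect_env was run from
print(sys.executable)     # path of the active Python interpreter
print(torch.__file__)     # which torch installation is actually imported
print(torch.__version__)
```

If the interpreter path or the torch location differs from the conda environment where I installed pytorch-cuda=11.7, that would explain the discrepancy.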