WSL2 + CUDA + GeForce RTX 3090 not working

Hi there,

For the life of me, I cannot get PyTorch to work with CUDA in WSL2. I’ve been struggling with this problem for a couple of days now and have followed just about every tutorial you can find online about how to make Torch + CUDA + WSL2 work (including Nvidia’s and this one). Any help is much appreciated.

Windows/WSL2 specs:
Windows 11, OS build 22000.376
WSL2 kernel: 5.10.60.1-microsoft-standard-WSL2
Ubuntu 20.04

GPU specs:
Nvidia GeForce RTX 3090
Nvidia Driver Version: 510.06 (from nvidia-smi)
CUDA version supported: up to 11.6 (from nvidia-smi)

I installed PyTorch with CUDA support using conda packages: conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch (using Python 3.8.12)

However, torch.cuda.is_available() returns False, and e.g. torch.zeros(1).cuda() raises RuntimeError: No CUDA GPUs are available.
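
For reference, this is the minimal check I’m running inside the WSL2 conda environment (nothing beyond the calls mentioned above):

import torch

print(torch.version.cuda)         # 11.3, i.e. the CUDA version PyTorch was built against
print(torch.cuda.is_available())  # False
print(torch.cuda.device_count())  # 0
torch.zeros(1).cuda()             # RuntimeError: No CUDA GPUs are available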

Here’s the output from collect_env.py:

Collecting environment information…
PyTorch version: 1.10.1
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.8.12 | packaged by conda-forge | (default, Oct 12 2021, 21:57:06) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.60.1-microsoft-standard-WSL2-x86_64-with-glibc2.10
Is CUDA available: False
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 510.06
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.3.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.3.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.3.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.3.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.3.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.3.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.3.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.21.2
[pip3] torch==1.10.1
[pip3] torchaudio==0.10.1
[pip3] torchvision==0.11.2
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py38h95df7f1_0 conda-forge
[conda] mkl_fft 1.3.1 py38h8666266_1 conda-forge
[conda] mkl_random 1.2.2 py38h1abd341_0 conda-forge
[conda] numpy 1.21.2 py38h20f2e39_0
[conda] numpy-base 1.21.2 py38h79a1101_0
[conda] pytorch 1.10.1 py3.8_cuda11.3_cudnn8.2.0_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 0.10.1 py38_cu113 pytorch
[conda] torchvision 0.11.2 py38_cu113 pytorch

I’ve tried building PyTorch from source as well; the build succeeds, but the same problem persists.

For completeness’ sake, I should mention that if I install PyTorch with cudatoolkit=10.2, the GPU is found (torch.cuda.is_available() returns True). However, since the GeForce RTX 3090 isn’t supported by CUDA 10, e.g. torch.zeros(1).cuda() produces an error saying that NVIDIA GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
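
In case it helps anyone debugging the same mismatch, the architectures a PyTorch build supports can be compared against the card’s compute capability. A minimal sketch (this only works with the cudatoolkit=10.2 install here, since that’s the one that actually sees the GPU; device index 0 is just the default):

import torch

print(torch.cuda.get_device_capability(0))  # (8, 6) for the RTX 3090, i.e. sm_86
print(torch.cuda.get_arch_list())           # the CUDA 10.2 build stops short of sm_86, hence the error above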

When installing PyTorch on Windows (using conda and cudatoolkit=11.3), everything works perfectly.


Hi, is there any solution to this issue? I am also facing the same error when working with WSL2, PyTorch, and Detectron2. My system details:
GPU: NVIDIA Quadro RTX 3000
CUDA: 11.6
Driver version: 511.65


I didn’t manage to fix the problem. I switched to a dual boot (Ubuntu + Windows) instead.

I have the same system and GPU specs as yours, and this works for me:

conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c conda-forge
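
After installing, a quick sanity check along the lines of the original post should confirm the GPU is picked up:

import torch

print(torch.cuda.is_available())      # should now return True
print(torch.cuda.get_device_name(0))  # should report the RTX 3090
torch.zeros(1).cuda()                 # should no longer raise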

I can confirm that the solution provided by @Earth_Three has worked for me.

Here’s the system information from nvidia-smi:
GPU: NVIDIA RTX 3070
Driver version: 497.29
CUDA Version: 11.5