Hi!
I built a Docker image from the PyTorch image `2.2.1-cuda12.1-cudnn8-devel` in WSL2, and I've already installed the NVIDIA Container Toolkit and restarted Docker. I can run `nvidia-smi` inside the Docker container.
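For context, this is roughly how I start the container (a sketch; the `--gpus all` flag comes from the NVIDIA Container Toolkit, and your exact flags may differ from mine):

```shell
# Hypothetical invocation: expose all GPUs to the container via the
# NVIDIA Container Toolkit and run nvidia-smi as a sanity check
docker run --rm --gpus all pytorch/pytorch:2.2.1-cuda12.1-cudnn8-devel nvidia-smi
```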
But when I run `torch.cuda.is_available()` inside the container, it returns `False`, while `torch.backends.cudnn.enabled` returns `True`.
I also tried `2.0.1-cuda11.7-cudnn8-devel`, and it has the same problem.
However, when I try the `latest` PyTorch image, there is no problem: `torch.cuda.is_available()` and `torch.backends.cudnn.enabled` both return `True`.
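For reference, this is the quick check I run inside each container (the `torch.version.cuda` line is extra here, just to show which CUDA toolkit the wheel was built against):

```python
import torch

# Diagnostic run inside each container to compare builds
print("torch:", torch.__version__)                     # PyTorch version
print("built for CUDA:", torch.version.cuda)           # CUDA version PyTorch was compiled against
print("cuda available:", torch.cuda.is_available())    # whether the runtime can see a GPU
print("cudnn enabled:", torch.backends.cudnn.enabled)  # True even when no GPU is visible
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```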
```
> nvidia-smi
Fri Mar 22 11:04:15 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.40.07              Driver Version: 551.52         CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4090        On  |   00000000:01:00.0  On |                    0 |
|  0%   47C    P8             13W /  450W |     719MiB /  23028MiB |      3%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A        24      G   /Xwayland                                   N/A      |
|    0   N/A  N/A        35      G   /Xwayland                                   N/A      |
|    0   N/A  N/A       109      G   /Xwayland                                   N/A      |
+-----------------------------------------------------------------------------------------+
```
```
> nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Jun__8_16:49:14_PDT_2022
Cuda compilation tools, release 11.7, V11.7.99
Build cuda_11.7.r11.7/compiler.31442593_0
```
I have no idea which part went wrong. Can anyone help me?