I do seem to be able to run the CUDA examples:
mlucy@DESKTOP-MIBK3CH:~/cuda-samples/bin/x86_64/linux/release$ ./eigenvalues
Starting eigenvalues
GPU Device 0: "Ampere" with compute capability 8.6
Matrix size: 2048 x 2048
Precision: 0.000010
Iterations to be timed: 100
Result filename: 'eigenvalues.dat'
Gerschgorin interval: -2.894310 / 2.923303
Average time step 1: 0.920770 ms
Average time step 2, one intervals: 1.084130 ms
Average time step 2, mult intervals: 2.393340 ms
Average time TOTAL: 4.448770 ms
Test Succeeded!
When I run the command you gave, I get this error rather than a complaint about the driver:
mlucy@DESKTOP-MIBK3CH:~/cuda-samples/bin/x86_64/linux/release$ python -c "import torch; print(torch.randn(1).cuda())"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/mlucy/.local/lib/python3.8/site-packages/torch/cuda/__init__.py", line 217, in _lazy_init
    torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available
I followed the instructions at CUDA on WSL :: CUDA Toolkit Documentation for setting up CUDA with WSL2, which say to install a display driver on the Windows side only, not inside Linux.
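Since the CUDA samples run fine but PyTorch can't see the GPU, the driver itself is presumably working, and the problem is likely on the Python side. Here is a small diagnostic sketch I put together to see what user-space Python can find; the specific checks (driver library visibility and the CUDA_VISIBLE_DEVICES variable, which hides all GPUs if set to an empty string) are my own guesses at common causes, not something from the CUDA docs:

```python
import ctypes.util
import os

# Check whether the loader can find the CUDA driver library at all.
# On WSL2 the Windows driver exposes it under /usr/lib/wsl/lib, which
# must be on the library search path for CUDA apps to see the GPU.
libcuda = ctypes.util.find_library("cuda")
print("libcuda found:", libcuda)  # None means the driver library is not visible

# If CUDA_VISIBLE_DEVICES is set to an empty string, CUDA apps see no GPUs
# even when the driver works, which produces exactly this PyTorch error.
print("CUDA_VISIBLE_DEVICES:", os.environ.get("CUDA_VISIBLE_DEVICES", "<unset>"))
```

If libcuda shows up and the variable is unset, the next thing I'd suspect is a CPU-only PyTorch wheel (check with `python -c "import torch; print(torch.version.cuda)"` — it prints None for CPU-only builds).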