No CUDA GPUs available

When I try to run my code, I get this message:
torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available

But if I run this command, it looks like the GPU is being detected:
python -c "import torch; print(torch.cuda.is_available()); print(torch.cuda.device_count()); print(torch.cuda.get_device_name(0))"

output:
True
1
NVIDIA GeForce RTX 4060 Ti

And nvidia-smi also shows the GPU:

[screenshot of nvidia-smi output]

I am using Ubuntu on WSL2 and don't know what the problem is. It looks like PyTorch is detecting my GPU, but when I try to run the code it fails.

Based on the error message, it seems PyTorch is able to detect your GPU via NVML (which is also used by nvidia-smi) but fails to initialize the CUDA context, which often points to a driver or setup issue.
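
One way to narrow it down is to force CUDA initialization explicitly in the same environment (same shell, same venv/conda env, same launcher) that runs the failing script. Here is a minimal diagnostic sketch; the CUDA_VISIBLE_DEVICES check is just an assumption about one common cause (a launcher or config setting it to an empty or invalid value), not something your post confirms:

import os
import torch

# Assumption: a launcher/config may set CUDA_VISIBLE_DEVICES to "" or an
# invalid index, which hides the GPU from the CUDA runtime even though NVML
# (used by nvidia-smi and the availability check) still sees it.
print("CUDA_VISIBLE_DEVICES:", os.environ.get("CUDA_VISIBLE_DEVICES"))

# torch.cuda.is_available() may rely on an NVML-based check that never
# creates a CUDA context, so force full initialization to reproduce the
# failure directly.
torch.cuda.init()
print("initialized:", torch.cuda.is_initialized())
print("current device:", torch.cuda.current_device())

# Small allocation to confirm the CUDA context actually works.
print(torch.randn(2, 2, device="cuda"))

If this snippet also raises "No CUDA GPUs are available", the problem is in the environment (driver, WSL2 CUDA setup, or a masked device) rather than in your training code. If it succeeds, compare how the failing script is launched, e.g. whether it uses a different interpreter or environment, or a wrapper that overrides CUDA_VISIBLE_DEVICES.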