I am running Miniconda on Ubuntu 22.04 under WSL. I first installed the NVIDIA CUDA toolkit for WSL and then installed Miniconda. The original CUDA version was 11.5, which I then upgraded to 12.4 with conda install nvidia::cuda-toolkit. Then I installed PyTorch built against CUDA 12.1 using pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121. Now when I print torch.version.cuda it returns 12.1. However, when I call print(torch.cuda.is_available()), it returns the following:
/root/miniconda3/lib/python3.12/site-packages/torch/cuda/__init__.py:128: UserWarning: CUDA initialization: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 2: out of memory (Triggered internally at …/c10/cuda/CUDAFunctions.cpp:108.)
return torch._C._cuda_getDeviceCount() > 0
False
If I try print("Number of GPUs:", torch.cuda.device_count()), it returns 4.
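For reference, here is a small sketch of the diagnostic sequence I am running, collected into one helper (the guard around the import is just so the script degrades gracefully rather than crashing when PyTorch is absent; the helper name cuda_report is my own, not a PyTorch API):

```python
# Diagnostic sketch: summarize what PyTorch sees of CUDA.
# cuda_report is a hypothetical helper name; the guarded import lets the
# script run (and report None) even where torch is not installed.
try:
    import torch
except ImportError:
    torch = None

def cuda_report():
    """Return a dict summarizing PyTorch's view of CUDA."""
    if torch is None:
        return {"torch": None}
    return {
        "torch": torch.__version__,          # installed PyTorch version
        "cuda_build": torch.version.cuda,    # CUDA version torch was built against
        "available": torch.cuda.is_available(),
        "device_count": torch.cuda.device_count(),
    }

if __name__ == "__main__":
    for key, value in cuda_report().items():
        print(f"{key}: {value}")
```

In my environment this is where the inconsistency shows up: available comes back False (with the out-of-memory warning above), yet device_count comes back 4.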