Trouble installing torch with CUDA using conda on WSL2

I’m having trouble getting conda to install PyTorch with CUDA on WSL2. I have done the necessary WSL2 setup on Windows 11: Ubuntu 20.04, fully updated, with the latest Nvidia WSL driver (version 510.06, as per the Nvidia WSL website).

However, when I try to install PyTorch via conda with the usual command

conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch

I keep getting the CPU-only version of PyTorch:

pytorch                   1.10.0              py3.9_cpu_0    pytorch
pytorch-mutex             1.0                         cpu    pytorch
torchaudio                0.10.0                 py39_cpu  [cpuonly]  pytorch
torchvision               0.11.1                 py39_cpu  [cpuonly]  pytorch
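
A quick way to confirm which build actually got installed (the CPU-only build prints None for the CUDA version and False for availability):

python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"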

Would anyone know how to get conda to pull the right packages for PyTorch with CUDA enabled? Thanks!

I have exactly the same problem. I kept removing it with conda uninstall cpuonly etc., but conda just installs the CPU-only version again. My WSL2 sees CUDA without a problem, and I have even installed 3 GB of Nvidia CUDA tooling inside Ubuntu.
Have you solved the problem in the end? It really is a nightmare.

I found that you really have to get rid of all traces of Linux Nvidia drivers and CUDA installations inside the WSL distro. If necessary you could start from a clean WSL installation, but there are also plenty of instructions around for removing the existing packages. Then do a clean install of the Windows Nvidia driver, making sure it is the WSL2-enabled version. After that, conda install should be able to pull the right build.
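
For reference, something along these lines inside the WSL Ubuntu shell will show whether a Linux-side driver or CUDA toolkit is still lingering; the exact package names depend on how they were installed, so treat this as a rough sketch rather than an exact recipe:

dpkg -l | grep -i -E 'nvidia|cuda'      # list leftover driver/toolkit packages, if any
sudo apt-get purge 'nvidia-*' 'cuda-*'  # remove them (adjust names to what the list above shows)
sudo apt-get autoremove                 # clean up now-unneeded dependencies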

The first post may be old, but for anyone who encounters the same situation, here is what worked for me:
I have Windows 11 with WSL2 running Ubuntu 20.04, and my machine has an Nvidia RTX 3060 Laptop GPU.
I had the same problem: when installing with conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch, only the CPU version of PyTorch was installed.
So right after this installation, I removed torch, torchaudio, and torchvision using pip:
pip uninstall torch torchaudio torchvision
Then I reinstalled the “right” versions for my setup:
pip install torch==1.8.1+cu111 torchvision==0.9.1+cu111 torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
There might be a more direct way to do it, but since this works for me now, I didn’t look any further.
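
To sanity-check the reinstall, the command below should print the CUDA build tag and True, assuming the Windows-side WSL driver is set up correctly:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"

For this install it should print something like 1.8.1+cu111 True.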

With Windows 11 + Nvidia RTX 2080 Ti + Nvidia driver 527.56 + Ubuntu 22.04 + WSL Ubuntu kernel 5.15.79.1, I did not face any issues installing GPU-enabled PyTorch.

I first verified that the nvidia-smi command works in the WSL Ubuntu terminal, then installed Miniconda and ran conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.6 -c pytorch -c conda-forge.

Running python -c "import torch; print(torch.cuda.get_device_name())" prints my GPU’s name.

I have the same issue right now; so far, none of the above has worked.