Issue
When I check whether PyTorch can see the GPU in my JupyterLab notebook, I get False:
print(torch.cuda.is_available()) ---> False
When I try to install PyTorch as suggested in the topic above, I get the following error:
(gpt_env2) PS C:\Users\NoName> pip install [url hidden]/whl/nightly/cu121/torch-2.1.0.dev20230811%2Bcu121-cp311-cp311-win_amd64.whl
ERROR: torch-2.1.0.dev20230811+cu121-cp311-cp311-win_amd64.whl is not a supported wheel on this platform.
I should mention that I have an Intel-based laptop with an NVIDIA GPU, so I also tried the following, which doesn’t work either:
(gpt_env) PS C:\Users\NoName> pip install [url hidden]/whl/nightly/cu121/torch-2.1.0.dev20230811%2Bcu121-cp311-cp311-win_x86_64.whl
ERROR: torch-2.1.0.dev20230811+cu121-cp311-cp311-win_x86_64.whl is not a supported wheel on this platform.
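For context on the “not a supported wheel on this platform” error: pip rejects a wheel whose filename tags don’t match the running interpreter. The cp311 tag requires CPython 3.11, and win_amd64 (not win_x86_64, which pip never produces as a tag) is the correct platform tag for 64-bit Windows. A quick stdlib check of what your environment actually reports:

```python
import platform
import sys

# The wheel tag "cp311" means CPython 3.11; "win_amd64" means 64-bit Windows.
# Compare these against the interpreter that pip is running under.
print(sys.version_info[:2])   # must be (3, 11) to install a cp311 wheel
print(platform.machine())     # typically 'AMD64' on 64-bit Windows
print(platform.system())      # 'Windows', 'Linux', or 'Darwin'
```

If the Python version printed here is not 3.11, a cp311 wheel will always be rejected, regardless of the CPU vendor.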
Could you please help me resolve this issue and get PyTorch set up to use the GPU?
Thanks in advance for your help,
Best regards,
Maks.
Problem solved. The issue was simple: I hadn’t read the PyTorch site carefully enough, especially the part that generates the Anaconda command. As of Apr-12-2024, that banner showed that the minimum required Python version is >= 3.8, and I had set up my environment with Python 3.7. Two commands fixed everything:
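The poster’s two conda commands aren’t shown above; the fix amounted to recreating the environment with a new enough Python. The version requirement itself (the >= 3.8 minimum quoted above) is easy to pre-check in a script before installing:

```python
import sys

# Minimum Python shown on the PyTorch install matrix as of Apr 2024 (per the post above).
MIN_PYTHON = (3, 8)

if sys.version_info[:2] >= MIN_PYTHON:
    print("Python %d.%d is new enough for PyTorch" % sys.version_info[:2])
else:
    print("Python %d.%d is too old; recreate the env with >= 3.8" % sys.version_info[:2])
```

Tuple comparison does the right thing here: (3, 7) < (3, 8) < (3, 10), so no string parsing is needed.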
Thank you for this hint! I struggled with pytorch-cuda=12.4 for a whole day and always got a CPU version. I just changed it to 12.1 and everything worked… what a nightmare…
Hello! Here’s the situation: the NVIDIA Studio driver 565.90 (GTX 1650 Ti) is installed, and PyTorch was installed for CUDA 12.4. Running

import torch
print(torch.cuda.is_available())

returns False, and everything runs on the CPU.
What should I do? Checking the version shows 2.5.1+cpu. I need PyTorch to send work to the GPU and use the CUDA support built into the graphics card driver. Where is the bottleneck, and how can I work around it? Thank you. Heeeeelp. (Windows 10 64-bit; I’m writing my script in Python 3.10 in VS Code.) It’s possible that CUDA 12.4 doesn’t support the driver or, conversely, the driver doesn’t support the GPU. Should I install the CUDA Toolkit? If so, which version? Or should I set something in the environment variables? I installed the 12.1 version … result:
[Running] python -u “d:\TRAN v1\CUda.py”
PyTorch is using CPU.
Tensor is allocated on: cpu
2.5.1+cpu
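The `+cpu` local-version suffix in `2.5.1+cpu` is the giveaway: it marks a CPU-only build, whereas a CUDA build reports something like `2.5.1+cu121`. A small helper (the name `is_cpu_build` is just for illustration) makes the check explicit:

```python
def is_cpu_build(torch_version: str) -> bool:
    """Return True if a torch version string denotes a CPU-only build."""
    # The local version label follows "+": "cpu" for CPU-only, "cu121" etc. for CUDA.
    return torch_version.partition("+")[2] == "cpu"

print(is_cpu_build("2.5.1+cpu"))    # True
print(is_cpu_build("2.5.1+cu121"))  # False
print(is_cpu_build("2.5.1"))        # False (no local label at all)
```

If `is_cpu_build(torch.__version__)` is True, no driver or toolkit change will help; the binary itself has to be replaced with a CUDA build.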
I faced the same problem: trying to install pytorch-cuda=12.4 kept installing the CPU version. Installing into a fresh environment worked in my case.
You don’t need to install a CUDA toolkit as the PyTorch binaries ship with their own CUDA runtime dependencies.
Select any CUDA version from the install matrix on pytorch.org, copy/paste the command into your terminal, and execute it.
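After running the command from the install matrix, a quick verification (run inside the same environment; the import guard just keeps the snippet safe to run anywhere) looks like:

```python
import importlib.util

if importlib.util.find_spec("torch") is None:
    print("torch is not installed in this environment")
else:
    import torch
    print(torch.__version__)          # should report +cuXXX, not +cpu
    print(torch.version.cuda)         # CUDA runtime version the binary ships with
    print(torch.cuda.is_available())  # True once the driver and build line up
```

If the version string still ends in +cpu after installing, the CPU wheel is shadowing the CUDA one; uninstall torch first (or use a fresh environment, as suggested above) and rerun the install command.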