Help with Installing PyTorch with CUDA

Hello!

I am facing issues while installing and using PyTorch with CUDA support on my computer. Here are some details about my system and the steps I have taken:

System Information:

  • Graphics Card: NVIDIA GeForce GTX 1050 Ti
  • NVIDIA Driver Version: 566.03
  • CUDA Version (from nvidia-smi): 12.7
  • CUDA Version (from nvcc): 11.7

Steps Taken:

  1. I installed Anaconda and created an environment named pytorch_env.
  2. I installed PyTorch, torchvision, and torchaudio using the command:
    conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
    
  3. I checked the installation by running Python and executing the following commands:
    import torch
    print(torch.__version__)           # PyTorch Version: 2.4.1
    print(torch.cuda.is_available())   # CUDA Availability: False
    

Problem:

Even though PyTorch is installed, CUDA availability returns False. I have checked the NVIDIA drivers and the installation of the CUDA Toolkit, but the issue persists.

Questions:

  1. How can I properly configure PyTorch to work with CUDA?
  2. Do I need to install a different version of PyTorch or NVIDIA drivers to resolve this issue?
  3. Are there any additional steps I could take to troubleshoot this problem?
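As a starting point for question 3, the two-line check can be folded into a small diagnostic helper. This is only a sketch (the function name `diagnose` is my own); it is written as a pure function so the inputs are exactly the values that `torch.__version__`, `torch.version.cuda`, and `torch.cuda.is_available()` report:

```python
def diagnose(version: str, cuda: "str | None", available: bool) -> str:
    """Map PyTorch's version info to a likely cause of 'CUDA not available'.

    version   -- torch.__version__
    cuda      -- torch.version.cuda (None for CPU-only builds)
    available -- torch.cuda.is_available()
    """
    if available:
        return f"OK: torch {version} with CUDA runtime {cuda}"
    if cuda is None or "+cpu" in version:
        return "CPU-only build installed; reinstall from a CUDA wheel index"
    return "CUDA build installed, but no usable GPU/driver was detected"

# Usage: pass in the live values, e.g.
# diagnose(torch.__version__, torch.version.cuda, torch.cuda.is_available())
```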

I would appreciate any help or advice!

What exactly did you check to make sure the driver works?
Your locally installed CUDA toolkit won’t be used as PyTorch binaries ship with their own CUDA runtime dependencies.

Your install command also looks wrong, as it does not correspond to any command from our install matrix.

We checked several key aspects to ensure that the driver works correctly with PyTorch:

  1. Driver Verification: We used the nvidia-smi command to confirm that the NVIDIA driver is installed and functioning. This command shows the driver version and the CUDA version it supports, ensuring compatibility with PyTorch.
  2. CUDA Toolkit: We noted that the locally installed CUDA toolkit is not utilized by PyTorch binaries, as they come with their own CUDA runtime dependencies. This means we don’t need to worry about having a separate CUDA installation unless we are building from source.
  3. Installation Command: We carefully reviewed the installation command used for PyTorch. It should match the official installation matrix provided on the PyTorch website. For example, we used:

```bash
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```

This command ensures that we are installing the correct version of PyTorch with CUDA support.
  4. CUDA Availability Check: After installation, we ran a simple check in Python:

```python
import torch
print(torch.cuda.is_available())
```

  This confirmed whether PyTorch could access the GPU.
  5. Environment Setup: We created a clean virtual environment to avoid conflicts with other packages, ensuring that all dependencies were correctly installed without any remnants from previous installations.

By following these steps, we ensured that everything was set up properly for using PyTorch with GPU support.
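The install-matrix point (item 3 above) can be mechanized: the pip index URL is derived from the CUDA runtime version. A sketch of that mapping (the helper name `wheel_index_url` is my own, and the URL pattern is assumed from the `cu118` example above):

```python
def wheel_index_url(cuda: str) -> str:
    """Build the download.pytorch.org wheel index URL for a CUDA runtime
    version string, e.g. '11.8' -> 'https://download.pytorch.org/whl/cu118'."""
    major, minor = cuda.split(".")
    return f"https://download.pytorch.org/whl/cu{major}{minor}"

# wheel_index_url("11.8") -> "https://download.pytorch.org/whl/cu118"
```

Note that the exact set of supported `cuXYZ` tags varies per PyTorch release, so the generated URL still needs to be checked against the official install matrix.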

Your question implies that you may already have an idea of how to solve the problem. If you have specific recommendations or advice, please share them! It could help us better understand the situation and find an optimal solution.

I’m sharing the process of installing PyTorch with CUDA support on my computer. Below are the steps and verification results that I performed.

Checking CUDA Version

```bash
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Tue_May__3_19:00:59_Pacific_Daylight_Time_2022
Cuda compilation tools, release 11.7, V11.7.64
Build cuda_11.7.r11.7/compiler.31294372_0
```

Checking Python Version

```bash
Python 3.12.4
```

Checking pip Version

```bash
pip 24.2 from E:\tyflow\Tools\python\lib\site-packages\pip (python 3.10)
```

Checking Installed PyTorch Version

```bash
PyTorch Version: 2.3.1
```

Checking Available CUDA Devices

```bash
Number of available CUDA devices: CUDA is not available.
```

Since CUDA is not available, I decided to reinstall PyTorch.

Uninstalling Current Versions of PyTorch

```bash
Found existing installation: torch 2.5.0
Uninstalling torch-2.5.0:
  Successfully uninstalled torch-2.5.0
Found existing installation: torchvision 0.20.0
Uninstalling torchvision-0.20.0:
  Successfully uninstalled torchvision-0.20.0
Found existing installation: torchaudio 2.5.0
Uninstalling torchaudio-2.5.0:
  Successfully uninstalled torchaudio-2.5.0
```

Installing PyTorch with CUDA Support

```bash
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu117
Collecting torch
Using cached torch-2.5.0-cp310-cp310-win_amd64.whl.metadata (28 kB)
Collecting torchvision
Using cached torchvision-0.20.0-cp310-cp310-win_amd64.whl.metadata (6.2 kB)
Collecting torchaudio
Using cached torchaudio-2.5.0-cp310-cp310-win_amd64.whl.metadata (6.5 kB)
Requirement already satisfied: filelock in e:\tyflow\tools\python\lib\site-packages (from torch) (3.16.1)
...
Successfully installed torch-2.5.0 torchaudio-2.5.0 torchvision-0.20.0
```

Checking Installation of PyTorch with CUDA

```bash
CUDA available: False
```

Rechecking PyTorch Version After Reinstallation

```bash
PyTorch Version: 2.3.1
```

Rechecking Available CUDA Devices After Reinstallation

```bash
Number of available CUDA devices: 0.
```
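The logs above disagree with each other: the uninstall touches torch 2.5.0 while the version check reports 2.3.1, and `pip` itself mentions `python 3.10` while the interpreter is 3.12.4. That usually means two different interpreters or site-packages directories are in play. A stdlib-only sketch (helper name `resolve` is my own) to see where a given interpreter actually imports a package from:

```python
import importlib.util
import sys

def resolve(package: str) -> "str | None":
    """Return the file path this interpreter would import `package` from,
    or None if it is not importable here."""
    spec = importlib.util.find_spec(package)
    return spec.origin if spec else None

# Compare the interpreter you run scripts with against the one pip installs into:
print(sys.executable)
print(resolve("torch"))  # None means *this* interpreter has no torch at all
```

Running this with each Python on the machine (the Anaconda one and `E:\tyflow\Tools\python`) should reveal which install each version number came from.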

This is unfortunately not a valid driver test and you should instead verify if any CUDA application is able to run on your GPU.

This command looks correct, but it does not match any of your other outputs. For example, your version check claims torch==2.3.1 is installed, while the uninstall log shows torch==2.5.0 being removed; later you reinstall torch==2.5.0, yet the check again reports torch==2.3.1.

I would recommend starting with a clean and new environment, copy/pasting the install command from our website, and install a single version of PyTorch.
The installation log should also show whether a CUDA-enabled binary is being installed, and you can double-check it via torch.version.cuda.

Thank you for taking the time to respond; I really appreciate it. I'd like to clarify what you mean by a new environment: should I uninstall and reinstall Python, or is creating a virtual environment enough? I set the environment up twice, once with pip and once with Anaconda, and installed using the command from the website, but during testing I received a message saying that no graphics card was found.

To understand the problem, I will show the test results I obtained using CUDA.

```bash
C:\Users\ASROCK>nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Tue_May__3_19:00:59_Pacific_Daylight_Time_2022
Cuda compilation tools, release 11.7, V11.7.64
Build cuda_11.7.r11.7/compiler.31294372_0

C:\Users\ASROCK>where nvcc
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7\bin\nvcc.exe
```

```bash
CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "NVIDIA GeForce GTX 1050 Ti"
CUDA Driver Version / Runtime Version          12.7 / 11.7
CUDA Capability Major/Minor version number:    6.1
Total amount of global memory:                 4096 MBytes (4294705152 bytes)
( 6) Multiprocessors, (128) CUDA Cores/MP:     768 CUDA Cores
GPU Max Clock rate:                            1392 MHz (1.39 GHz)
Memory Clock rate:                             3504 Mhz
Memory Bus Width:                              128-bit
L2 Cache Size:                                 1048576 bytes
Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
Total amount of constant memory:               zu bytes
Total amount of shared memory per block:       zu bytes
Total number of registers available per block: 65536
Warp size:                                     32
Maximum number of threads per multiprocessor:  2048
Maximum number of threads per block:           1024
Max dimension size of a thread block (x,y,z):  (1024, 1024, 64)
Max dimension size of a grid size (x,y,z):     (2147483647, 65535, 65535)
Maximum memory pitch:                          zu bytes
Texture alignment:                             zu bytes
Concurrent copy and kernel execution:          Yes with 5 copy engine(s)
Run time limit on kernels:                     Yes
Integrated GPU sharing Host Memory:            No
Support host page-locked memory mapping:       Yes
Alignment requirement for Surfaces:            Yes
Device has ECC support:                        Disabled
CUDA Device Driver Mode (TCC or WDDM):         WDDM (Windows Display Driver Model)
Device supports Unified Addressing (UVA):      Yes
Device supports Compute Preemption:            Yes
Supports Cooperative Kernel Launch:            Yes
Supports MultiDevice Co-op Kernel Launch:      No
Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
Compute Mode:
  < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 12.7, CUDA Runtime Version = 11.7, NumDevs = 1, Device0 = NVIDIA GeForce GTX 1050 Ti
Result = PASS
```

```bash
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7\extras\demo_suite>bandwidthTest.exe > c:\1\222.txt

[CUDA Bandwidth Test] - Starting...
Running on...

Device 0: NVIDIA GeForce GTX 1050 Ti
Quick Mode

Host to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
  Transfer Size (Bytes)   Bandwidth(MB/s)
  33554432                12643.2

Device to Host Bandwidth, 1 Device(s)
PINNED Memory Transfers
  Transfer Size (Bytes)   Bandwidth(MB/s)
  33554432                12514.7

Device to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
  Transfer Size (Bytes)   Bandwidth(MB/s)
  33554432                96155.2

Result = PASS

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
```

System Information:

  • Graphics Card: NVIDIA GeForce GTX 1050 Ti
  • CUDA Driver Version: 12.7
  • CUDA Runtime Version: 11.7
  • Total Memory: 4096 MB

Testing Results:

  • Device Query: confirmed that the graphics card is functioning correctly and supports CUDA; all information about the graphics card specifications was successfully retrieved.
  • Bandwidth Test:
    • Host to Device: 12,643.2 MB/s
    • Device to Host: 12,514.7 MB/s
    • Device to Device: 96,155.2 MB/s
  • The results show that the data transfer bandwidth between host and device is within the normal range, indicating good system performance.

Conclusions:

  • The system fully supports CUDA and is functioning correctly.
  • The data transfer bandwidth is at an acceptable level, making the graphics card suitable for computationally intensive tasks.
  • CUDA can be used for developing and running applications optimized for this graphics card.

I express my deep gratitude for the attention and time you have given.

No, don’t uninstall Python itself.
Just make sure no PyTorch installation can be found before trying to install a new version.
I.e. run these commands a few times in your base environment (i.e. without activating a new virtual environment):

```bash
pip uninstall torch -y
pip uninstall torch -y
pip uninstall torch -y
...
conda uninstall pytorch -y
conda uninstall pytorch -y
conda uninstall pytorch -y
...
```

The terminal should then claim no further installations were detected. Once this is done, create a new virtual environment, e.g. via conda, and install a single PyTorch binary there using an install command from our website.
During the installation, check which packages are pulled in: both the pip wheel and the conda binary should either install the CUDA runtime libraries directly or indicate the CUDA runtime version in the PyTorch binary name.
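That "CUDA runtime version in the binary name" can be read off mechanically: CUDA wheels carry a `+cuXYZ` local-version tag (e.g. `torch-2.5.0+cu118-cp310-cp310-win_amd64.whl`), while CPU wheels carry `+cpu` or no tag. A hedged sketch (function name `cuda_tag` is my own):

```python
import re

def cuda_tag(name: str) -> "str | None":
    """Return the 'cuXYZ' tag embedded in a PyTorch version string or wheel
    filename, or None for CPU-only builds."""
    match = re.search(r"\+?(cu\d+)", name)
    return match.group(1) if match else None

# cuda_tag("2.5.0+cu118") -> "cu118"; cuda_tag("2.3.1+cpu") -> None
```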

Afterwards, check which version was installed via:

```python
import torch
print(torch.__version__)
print(torch.version.cuda)
```

which should show the selected CUDA runtime version and perform a quick test to allocate a random tensor on the GPU:

```python
print(torch.randn(1).cuda())
```
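That allocation test can be wrapped so scripts report a readable message instead of a raw traceback. A sketch (the helper name is mine; the torch-like module is passed in as an argument, so the helper itself needs no GPU to be exercised):

```python
def gpu_smoke_test(torch_mod) -> "str | None":
    """Try to place one random tensor on the GPU.

    Returns None on success, or the error message on failure
    (e.g. "Torch not compiled with CUDA enabled")."""
    try:
        torch_mod.randn(1).cuda()
        return None
    except Exception as exc:
        return str(exc)

# Usage on a real install: gpu_smoke_test(torch) returns None when CUDA works.
```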

Hi, sorry...

```python
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.__version__)
2.3.1+cpu
>>> print(torch.version.cuda)
None
>>> print(torch.randn(1).cuda())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\ASROCK\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\cuda\__init__.py", line 284, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
```
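The `+cpu` suffix and `torch.version.cuda` being `None` are the two tells of a CPU-only wheel, which is exactly what the session above shows. A pure-function sketch of that check (names are mine), taking the values the interpreter printed:

```python
def is_cuda_build(version: str, cuda: "str | None") -> bool:
    """True if the installed PyTorch is a CUDA-enabled build.

    version -- torch.__version__ (CPU wheels carry a '+cpu' suffix)
    cuda    -- torch.version.cuda (None for CPU-only builds)
    """
    return cuda is not None and "+cpu" not in version

# The session above gives is_cuda_build("2.3.1+cpu", None) -> False
```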

Regarding the "command from our website": please write out exactly what it should be.

```bash
Microsoft Windows [Version 10.0.19045.5011]
(c) Microsoft Corporation. All rights reserved.

C:\Users\ASROCK>nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Tue_May__3_19:00:59_Pacific_Daylight_Time_2022
Cuda compilation tools, release 11.7, V11.7.64
Build cuda_11.7.r11.7/compiler.31294372_0

C:\Users\ASROCK>
```

```bash
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu117
```

OK ?

Did it work?

Dear ptrblck,
I want to express my sincere gratitude for your incredible help with the installation of PyTorch with CUDA support! Your support and professionalism have been a true lifesaver for me. Thanks to your recommendations and advice, I was able to successfully set up all the necessary components, and now I am enjoying all the benefits that this powerful library provides.
Your knowledge and experience made the installation process easy and understandable. I was filled with joy when everything worked flawlessly. Now I can focus on my projects and reach new heights in deep learning.
I wish you all the best in your endeavors! May your work bring you joy and satisfaction, and may your company continue to thrive and reach new heights.
Once again, thank you for your help!

I want to share my epic saga related to the installation of PyTorch with CUDA support. Before diving into this important task, I decided to prepare thoroughly! I removed everything I could: even Python didn’t survive! Conda? I got rid of dozens of programs that happened to be in my way. I even cleaned the floors at home and swept all the streets in the yard — I thought cleanliness would help with the installation.
After such preparation, I went to the bathhouse to clear my mind and recharge my energy. And finally, I started the installation… and nothing worked! I was beginning to think it was some kind of karma.
But then I installed version 11.8, and lo and behold — everything worked! Now I can enjoy all the benefits of PyTorch and CUDA.
Thank you for your help and patience! Without you, I would have remained in a world of non-working installations and clean floors!
Best wishes, Torch Cuda )))

Great to hear it’s working now and thanks for the update!

Does 11.8 refer to Python or CUDA? Are there certain version combinations that cause (known) problems?

Specifically, I am experiencing the same issue: torch does not detect CUDA with Python 3.10.16 and CUDA 12.8.

Edit: never mind; after reading the thread in detail and visiting the install page, I realized that on Windows you can't just pip install torch and expect it to ship with CUDA... so uninstalling and reinstalling with cu126 fixed it for me.