PyTorch 2.0.1 Not Recognizing CUDA 11.4 Despite Correct Environment Setup


Hello PyTorch Community,

I’m encountering an issue where PyTorch (torch.cuda.is_available()) returns False, indicating it does not recognize CUDA on a university server equipped with NVIDIA GPUs, running CUDA 11.4. I have verified CUDA installation with nvidia-smi, which confirms CUDA 11.4 is correctly installed.

Environment Details:

  • CUDA Version: 11.4 (verified with nvidia-smi)
  • PyTorch Version: 2.0.1 (installed using Conda from the pytorch channel)
  • OS: Ubuntu Linux
  • Python Version: 3.10 (within a Conda environment)

Steps Taken:

  1. Installed PyTorch 2.0.1 using Conda without specifying cudatoolkit version due to initial PackagesNotFoundError for cudatoolkit=11.4.
  2. Set CUDA-related environment variables correctly:
    • CUDA_HOME=/usr/local/cuda-11.4
    • LD_LIBRARY_PATH and PATH include CUDA directories.
  3. Verified environment variables (CUDA_HOME, LD_LIBRARY_PATH, PATH) are correctly set.
  4. Restarted the terminal and activated the medsam environment again to ensure changes took effect.
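The verification in step 3 can be sketched as a small script (check_cuda_env is a hypothetical helper, not part of any library; the paths are the ones from step 2):

```python
import os

def check_cuda_env(env):
    """Check that the CUDA variables from steps 2-3 are consistent with CUDA_HOME."""
    cuda_home = env.get("CUDA_HOME", "")
    return {
        "CUDA_HOME set": bool(cuda_home),
        "cuda bin on PATH": os.path.join(cuda_home, "bin")
            in env.get("PATH", "").split(os.pathsep),
        "cuda lib64 on LD_LIBRARY_PATH": os.path.join(cuda_home, "lib64")
            in env.get("LD_LIBRARY_PATH", "").split(os.pathsep),
    }

# Example values mirroring step 2 of the post (the /usr/bin PATH entry is illustrative):
example = {
    "CUDA_HOME": "/usr/local/cuda-11.4",
    "PATH": "/usr/local/cuda-11.4/bin:/usr/bin",
    "LD_LIBRARY_PATH": "/usr/local/cuda-11.4/lib64",
}
print(check_cuda_env(example))
```

In a real session you would pass os.environ instead of the example dict.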

Despite these steps, running torch.cuda.is_available() in Python still returns False, suggesting PyTorch does not recognize the CUDA installation.


Questions:

  1. Are there known compatibility issues with PyTorch 2.0.1 and CUDA 11.4?
  2. Could the issue be related to how PyTorch was installed or the environment setup?
  3. Are there additional steps I should take to troubleshoot or resolve this issue?

Any advice or suggestions would be greatly appreciated. I’m considering building PyTorch from source as a next step but wanted to reach out for any insights or solutions that might not require this.

Thank you!

This would mean you’ve installed the CPU-only binary and won’t be able to use your GPU.
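One way to confirm whether a CPU-only binary was installed is to inspect the build metadata (a minimal sketch; binary_cuda_info is a hypothetical helper, and the import guard only exists so the script also runs where torch is absent):

```python
import importlib.util

def binary_cuda_info():
    """Return (torch version, bundled CUDA version) if torch is importable, else None.

    torch.version.cuda is None for CPU-only builds - the case described above.
    A '+cpu' suffix in torch.__version__ is another telltale sign.
    """
    if importlib.util.find_spec("torch") is None:
        return None
    import torch
    return torch.__version__, torch.version.cuda

info = binary_cuda_info()
print(info if info is not None else "torch not installed in this environment")
```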

This doesn’t matter as the PyTorch binaries ship with their own CUDA dependencies. Your locally installed CUDA toolkit will be used if you build PyTorch from source or a custom CUDA extension.

No, PyTorch 2.0.1 will work with CUDA 11.4, but you would need to install it from source with CUDA 11.4 as we’ve built the binaries with CUDA 11.7 and 11.8 at that time.

Hello PyTorch Community,
I want to install PyTorch 2.0.1 using CUDA 11.4. What command line should I use for the wheel, since the prebuilt wheels for that version target CUDA 11.7 and 11.8? If I change cu117 to cu114 in the command line below, the result is:
ERROR: Could not find a version that satisfies the requirement torch==2.0.1 (from versions: none)
ERROR: No matching distribution found for torch==2.0.1

pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 torchaudio==0.13.1 --extra-index-url
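The error above is pip failing to find any torch==2.0.1 build whose local version tag is cu114 on the index it was pointed at. A toy sketch of that resolution step (the published set below is an illustrative subset of what the PyTorch wheel index hosts for 2.0.1; ROCm builds etc. are omitted):

```python
# Local version tags published for torch 2.0.1 (illustrative subset):
published_tags = {"cpu", "cu117", "cu118"}

def wheel_available(tag):
    """Mimics pip's check: is there any torch 2.0.1 build with this tag?"""
    return tag in published_tags

print(wheel_available("cu117"))  # True
print(wheel_available("cu114"))  # False -> "No matching distribution found"
```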

You could build PyTorch from source with CUDA 11.4 since no binaries were available for this combination, if you really need to use this old CUDA version.

I am sorry, I am a complete beginner. Could you explain in more detail how to “build PyTorch from source”? Thank you.

This section of the README describes how to build PyTorch from source.