Latest CUDA toolkit release 11.7: is it compatible with PyTorch?


I just saw there is a new release, 11.7, of the CUDA toolkit. Is it possible to build PyTorch with this?
I am hoping it will solve performance issues with my Gigabyte RTX 3080, which does not perform better than the GTX 1650 Ti in my notebook; I suspect this is because I have used prebuilt binaries for PyTorch. If 11.7 is not supported, what versions of CUDA and cuDNN do you recommend for an RTX 3080? Thanks!

Best, JZ


What are the issues you are experiencing with the 3080? It’s a bit tough to say what the performance impact would be without knowing the workload and the versions of the dependent libraries (e.g., cuBLAS, cuDNN) that are being used.

However, 11.7 should work fine with a recent version of PyTorch if you are building from source.

So I am trying to build PyTorch from source with CUDA 11.7 on Linux; however, I couldn't find a matching magma-cuda package for 11.7. The build instructions say:

# CUDA only: Add LAPACK support for the GPU if needed
conda install -c pytorch magma-cuda110 # or the magma-cuda* that matches your CUDA version from

any info about this?
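To double-check which builds actually exist, you can list the magma-cuda packages published on the pytorch conda channel (a quick sketch, assuming conda is installed):

```shell
# List all magma-cuda* packages on the pytorch channel; pick the one
# matching your installed toolkit, e.g. magma-cuda116 for CUDA 11.6
conda search -c pytorch "magma-cuda*"
```

If no magma-cuda117 entry shows up, the package for 11.7 simply hasn't been published yet.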

You could rebuild MAGMA from source for 11.7 or wait until we publish the updated packages.


@jayz Were you able to install PyTorch from the source for CUDA toolkit 11.7?
@ptrblck I am getting the below when I try installing from source:

CMake Error at torch/lib/libshm/CMakeLists.txt:36 (add_executable):
  The install of the torch_shm_manager target requires changing an RPATH from
  the build tree, but this is not supported with the Ninja generator unless
  on an ELF-based or XCOFF-based platform.  The
  CMAKE_BUILD_WITH_INSTALL_RPATH variable may be set to avoid this relinking

I tried installing the latest version of CMake and also setting the variable CMAKE_BUILD_WITH_INSTALL_RPATH=ON (not sure if this is right).

Can anybody help me with this?

I haven’t seen this issue before, but this similar post points to a cmake version which might be too old (the answer mentions installing cmake, but it must have already been installed since it’s the one raising the error, so maybe it was updated instead).

@ptrblck Thank you for your instant response!
I did try removing the prebuilt CMake (and I don't think it was an old version: 3.22.x) and reinstalling it with apt earlier, but this didn't work.
So I tried deleting the PyTorch repo itself, cloned it once again, then started the whole process, and this time it worked!
I guess it didn't work earlier because of the CMake cache? Not sure what happened here.

Weird issue, but you might be right that “something” was still in the cache. Running python setup.py clean should wipe all intermediate build files and should also work, but it’s good to hear it’s working now.
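For anyone hitting the same thing, a lighter-weight alternative to recloning is to clear the stale build state from the existing checkout (a sketch, assuming a standard source build):

```shell
cd pytorch
# Remove the build/ directory and cached CMake state from the last attempt
python setup.py clean
# If anything still lingers, remove every untracked file (destructive!)
git clean -xfd
# Then rebuild
python setup.py install
```

The git clean step wipes everything not tracked by git, including local config files, so use it only if the ordinary clean isn't enough.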

Nope, I went with the CUDA 11.6 binaries in the end; I did not try building with 11.7 from source.

I am trying to fix

AssertionError: Torch not compiled with CUDA enabled

I attempted many solutions from GitHub and Stack Overflow. It still doesn't work.
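For anyone debugging this error, a first step is to check whether the installed PyTorch build was compiled with CUDA at all, and which toolkit version it targets. A minimal sketch (the helper name is mine, not from any library):

```python
# Diagnostic sketch for the "Torch not compiled with CUDA enabled" error:
# report whether the installed PyTorch wheel was built with CUDA support,
# and which toolkit version it targets.
def cuda_build_info():
    try:
        import torch
    except ImportError:
        return "torch is not installed"
    if torch.version.cuda is None:
        # A CPU-only wheel: torch.zeros(1).cuda() will raise AssertionError.
        return "CPU-only build: Torch not compiled with CUDA enabled"
    return f"built for CUDA {torch.version.cuda}, runtime available: {torch.cuda.is_available()}"

if __name__ == "__main__":
    print(cuda_build_info())
```

If this reports a CPU-only build, no amount of driver or toolkit fiddling helps; you need to install a CUDA-enabled wheel (or build one).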

Any chance someone has attempted this solution? I am facing a similar issue with an RTX 3090, CUDA 11.7, and Windows 11.

These days I am working with the newest version of PyTorch on my Win11 laptop with an RTX 3070, but I keep running into fatal problems with CUDA 11.7. It seems I am not the only one suffering from this. Still waiting for a solution; running ML on the CPU is too slow.

I assume you are trying to build PyTorch from source using CUDA 11.7, as the binaries for it are not released yet. If you don't want to build from source, use the supported pip wheels or conda binaries built with CUDA 11.3 or 11.6.
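For example, the CUDA 11.6 wheels can be installed via the index URL from the install selector (the exact URL and package set may change between releases):

```shell
# Install the prebuilt CUDA 11.6 wheels instead of building from source
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
```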

Currently still not supported :frowning:

>>> import torch
>>> torch.cuda.is_available()
False
>>> torch.zeros(1).cuda()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\myxzlpltk\anaconda3\envs\tf-gpu\lib\site-packages\torch\cuda\__init__.py", line 211, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

If you insist, you can downgrade cuda-toolkit to 11.6
