PyTorch 2.1.2+cu121 with CUDA 1201 (you have 2.1.0+cpu)

I searched for existing topics and nothing relevant came up, so I'll ask the question here:

Based on the issue below, I have already tried a variety of options.

Python versions tested:
Python 3.9
Python 3.9.14
Python 3.10.6
Python 3.10.11

Was it installed in a virtual environment (venv)? Yes.

Has this problem happened multiple times? Yes.

What was the issue? PyTorch defaulting to the CPU-only build.

Did you uninstall xFormers? Yes.

Did you recently update the GPU driver (within the week before January 24th, 2024)? Yes.

Were there any issues prior to that?
It's hard to say, since there were no issues before, unless reinstalling a different Python caused it, as the driver was installed before that.
(I may consider downgrading the GPU driver just in case, but I need to rule out other issues first.)
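
One check I can run to confirm the driver itself is healthy is nvidia-smi, which reports the installed driver version and the highest CUDA version it supports:

nvidia-smi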

As of now I have purged the Python versions tested above from the Windows registry, along with the CUDA toolkit; however, I have not downgraded the driver yet.
Are there any other things I'm missing that need purging for a clean install?
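
For reference, the quick check I use to see which build the venv actually resolves (it just prints the installed torch version and whether CUDA is visible to it):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
pip show torch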

I placed this here in case other people encounter the same issue.

\folder\folder>python -m xformers.info
WARNING[XFORMERS]: xFormers can’t load C++/CUDA extensions. xFormers was built for:
PyTorch 2.1.2+cu121 with CUDA 1201 (you have 2.1.0+cpu)
Python 3.9.13 (you have 3.9.13)
Please reinstall xformers (see https://github.com/facebookresearch/xformers)
Memory-efficient attention, SwiGLU, sparse and more won’t be available.
Set XFORMERS_MORE_DETAILS=1 for more details
Unable to find python bindings at /usr/local/dcgm/bindings/python3. No data will be captured.
xFormers 0.0.23.post1
memory_efficient_attention.cutlassF: unavailable
memory_efficient_attention.cutlassB: unavailable
memory_efficient_attention.decoderF: unavailable
memory_efficient_attention.flshattF@0.0.0: unavailable
memory_efficient_attention.flshattB@0.0.0: unavailable
memory_efficient_attention.smallkF: unavailable
memory_efficient_attention.smallkB: unavailable
memory_efficient_attention.tritonflashattF: unavailable
memory_efficient_attention.tritonflashattB: unavailable
memory_efficient_attention.triton_splitKF: unavailable
indexing.scaled_index_addF: unavailable
indexing.scaled_index_addB: unavailable
indexing.index_select: unavailable
swiglu.dual_gemm_silu: unavailable
swiglu.gemm_fused_operand_sum: unavailable
swiglu.fused.p.cpp: not built
is_triton_available: False
pytorch.version: 2.1.0+cpu
pytorch.cuda: not available
dcgm_profiler: unavailable
build.info: available
build.cuda_version: 1201
build.python_version: 3.9.13
build.torch_version: 2.1.2+cu121
build.env.TORCH_CUDA_ARCH_LIST: 5.0+PTX 6.0 6.1 7.0 7.5 8.0+PTX 9.0
build.env.XFORMERS_BUILD_TYPE: Release
build.env.XFORMERS_ENABLE_DEBUG_ASSERTIONS: None
build.env.NVCC_FLAGS: None
build.env.XFORMERS_PACKAGE_FROM: wheel-v0.0.23.post1
source.privacy: open source


Uninstall your PyTorch CPU binary and install 2.1.2+cu121 by following the install instructions.
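
For example, inside the venv something along these lines should work (the torchvision/torchaudio pins are only illustrative; drop or adjust them to match your setup):

pip uninstall -y torch torchvision torchaudio
pip install torch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 --index-url https://download.pytorch.org/whl/cu121
pip install -U xformers --index-url https://download.pytorch.org/whl/cu121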
