Intel A770 GPU + Debian 13 -- XPU available: False

Hello, I’m struggling to install PyTorch with XPU support correctly. Here is my config:

$uname -ar
Linux thor 6.12.57+deb13-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.12.57-1 (2025-11-05) x86_64 GNU/Linux
$ python --version
Python 3.14.0
$ which python
/home/fred/.pyenv/shims/python

Then I install PyTorch with:

$pip install torch==2.9.1+xpu torchvision==0.23.1+xpu torchaudio==2.9.1+xpu intel-cmplr-lib-rt intel-cmplr-lib-ur intel-cmplr-lic-rt intel-sycl-rt pytorch-triton-xpu tcmlib umf intel-pti --index-url https://download.pytorch.org/whl/xpu

as (very well) explained here.

Then I checked:

>>> import torch
>>> print(f"XPU available: {torch.xpu.is_available()}")
/media/SSD02/venv/lib/python3.14/site-packages/torch/xpu/__init__.py:61: UserWarning: XPU device count is zero! (Triggered internally at /pytorch/c10/xpu/XPUFunctions.cpp:115.)
  return torch._C._xpu_getDeviceCount()
XPU available: False
>>> print(f"PyTorch version: {torch.__version__}")
PyTorch version: 2.9.1+xpu
>>> print(f"XPU compiled: {torch._C._xpu_getDeviceCount is not None}")
XPU compiled: True

What did I miss? Why is it not able to interact with the XPU? Thank you very much :slight_smile:

EDIT: From a fresh and clean install of Debian 13 + the Liquorix kernel, the problem is still the same… XPU available: False

$ uname -ar
Linux THOR 6.17.12-1-liquorix-amd64 #1 ZEN SMP PREEMPT_DYNAMIC liquorix 6.17-14.1~trixie (2025-12-12) x86_64 GNU/Linux

and

$ pip install torch==2.9.1+xpu torchvision==0.24.1+xpu torchaudio==2.9.1+xpu intel-cmplr-lib-rt intel-cmplr-lib-ur intel-cmplr-lic-rt intel-sycl-rt pytorch-triton-xpu tcmlib umf intel-pti --index-url https://download.pytorch.org/whl/xpu

-_-,


Hi Fred!

I don’t see anything in your post that suggests that the following is relevant to you, but I have
had a similar issue caused by an out-of-date libstdc++.so.

Try running:

SYCL_UR_TRACE=1 python -c "import torch; print(torch.xpu.is_available())"

Do you get a GLIBCXX version error similar to this?

<LOADER>[INFO]: failed to load adapter 'libur_adapter_level_zero.so.0' with error: <path_to_pytorch_install>/torch/lib/../../../../libstdc++.so.6: version `GLIBCXX_3.4.32' not found (required by /lib/x86_64-linux-gnu/libze_loader.so.1)

(Note that even when pytorch+xpu is working other errors occur, so my comments are
specific to the GLIBCXX error.)

If this is the case for you, how to fix it?

In my case, I am using conda. My underlying Linux (Ubuntu 24.04.3 LTS) has:

/usr/lib/x86_64-linux-gnu/libstdc++.so.6 -> libstdc++.so.6.0.33

which contains the sufficiently recent GLIBCXX, while my conda environment into which I
installed pytorch+xpu had:

<path_to_conda_env>/lib/libstdc++.so.6 -> libstdc++.so.6.0.29

which is not recent enough.

My fix was to copy my main Linux libstdc++.so.6.0.33 into the conda environment and re-create the libstdc++.so.6 symlink to point at it.
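The comparison behind this fix can be sketched in Python by scanning each binary for the GLIBCXX version tags it exports (a minimal sketch; the inline byte strings stand in for the real library files, which on a live system you would read with `open(path, 'rb').read()`):

```python
import re

def glibcxx_versions(blob: bytes) -> list:
    """Return the sorted GLIBCXX_x.y.z tags embedded in a libstdc++ binary."""
    tags = set(re.findall(rb"GLIBCXX_\d+(?:\.\d+)*", blob))
    return sorted(tags, key=lambda t: [int(p) for p in t.split(b"_")[1].split(b".")])

# Stand-ins for the real files; on a live system read e.g.
# /usr/lib/x86_64-linux-gnu/libstdc++.so.6 and <conda_env>/lib/libstdc++.so.6.
system_blob = b"\x00GLIBCXX_3.4.30\x00GLIBCXX_3.4.32\x00"  # libstdc++.so.6.0.33
conda_blob = b"\x00GLIBCXX_3.4.29\x00"                     # libstdc++.so.6.0.29

needed = b"GLIBCXX_3.4.32"  # the version libze_loader.so.1 complains about
print(needed in glibcxx_versions(system_blob))  # True
print(needed in glibcxx_versions(conda_blob))   # False
```

If the version the loader asks for is missing from the copy that gets loaded first, you get exactly the `GLIBCXX_3.4.32' not found` error above.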

(I attribute this to a missing dependency in the pytorch+xpu pip wheel and the fact that I can’t
figure out a clean way to tell conda to use an up-to-date libstdc++.so. As an aside, pytorch+xpu
isn’t as vigorously supported as one might hope.)

Good luck!

K. Frank

hello Frank

thank you for your help, here is the result for your first question

$ SYCL_UR_TRACE=1 python -c "import torch; print(torch.xpu.is_available())"
<LOADER>[INFO]: The adapter 'libur_adapter_level_zero_v2.so.0' is skipped because UR_LOADER_USE_LEVEL_ZERO_V2 or SYCL_UR_USE_LEVEL_ZERO_V2 is not set.
<LOADER>[INFO]: failed to load adapter 'libur_adapter_cuda.so.0' with error: libur_adapter_cuda.so.0: cannot open shared object file: No such file or directory
<LOADER>[INFO]: failed to load adapter '/home/fred/.pyenv/versions/3.14.2/lib/python3.14/site-packages/torch/lib/../../../../../lib/libur_adapter_cuda.so.0' with error: /home/fred/.pyenv/versions/3.14.2/lib/python3.14/site-packages/torch/lib/../../../../../lib/libur_adapter_cuda.so.0: cannot open shared object file: No such file or directory
<LOADER>[INFO]: failed to load adapter 'libur_adapter_hip.so.0' with error: libur_adapter_hip.so.0: cannot open shared object file: No such file or directory
<LOADER>[INFO]: failed to load adapter '/home/fred/.pyenv/versions/3.14.2/lib/python3.14/site-packages/torch/lib/../../../../../lib/libur_adapter_hip.so.0' with error: /home/fred/.pyenv/versions/3.14.2/lib/python3.14/site-packages/torch/lib/../../../../../lib/libur_adapter_hip.so.0: cannot open shared object file: No such file or directory
<LOADER>[INFO]: loaded adapter 0x0x56019d4e8b10 (libur_adapter_level_zero.so.0) from /home/fred/.pyenv/versions/3.14.2/lib/python3.14/site-packages/torch/lib/../../../../../lib/../lib/libur_adapter_level_zero.so.0
<LOADER>[INFO]: failed to load adapter 'libur_adapter_native_cpu.so.0' with error: libur_adapter_native_cpu.so.0: cannot open shared object file: No such file or directory
<LOADER>[INFO]: failed to load adapter '/home/fred/.pyenv/versions/3.14.2/lib/python3.14/site-packages/torch/lib/../../../../../lib/libur_adapter_native_cpu.so.0' with error: /home/fred/.pyenv/versions/3.14.2/lib/python3.14/site-packages/torch/lib/../../../../../lib/libur_adapter_native_cpu.so.0: cannot open shared object file: No such file or directory
<LOADER>[INFO]: loaded adapter 0x0x56019d4eb990 (libur_adapter_opencl.so.0) from /home/fred/.pyenv/versions/3.14.2/lib/python3.14/site-packages/torch/lib/../../../../../lib/../lib/libur_adapter_opencl.so.0
/home/fred/.pyenv/versions/3.14.2/lib/python3.14/site-packages/torch/xpu/__init__.py:61: UserWarning: XPU device count is zero! (Triggered internally at /pytorch/c10/xpu/XPUFunctions.cpp:115.)
  return torch._C._xpu_getDeviceCount()

OK, so it seems that libze is supposed to be at version 2, but I have:

$ sudo apt-cache policy libze1
libze1:
  Installed: 1.20.6-1
  Candidate: 1.20.6-1
  Version table:
 *** 1.20.6-1 500
        500 http://deb.debian.org/debian trixie/main amd64 Packages
        100 /var/lib/dpkg/status
$ sudo apt search libze-dev
libze-dev/stable,now 1.20.6-1 amd64

Huh? And of course no libze2 to be found :frowning:

Do I need to build my own level-zero package from the source repository itself?

Thanks for any help you can provide!

EDIT: After compiling and installing the level-zero package from source, the problem remains… I get exactly the same output as mentioned here. Unfortunately, no solution has been proposed there.

Hi Fred!

Okay, your error is different from the one I had been seeing. I don’t understand the xpu
stuff well enough to interpret the errors you are getting.

However, I did just successfully install the latest (stable?) xpu version and I can tell you
what I did.

I am running Ubuntu 24.04.3 LTS and have miniconda installed. I don’t remember whether
I installed any special intel or arc graphics drivers or whether they came already installed
on my machine.

My processor (from Settings / About) is “Intel Core Ultra 9 185H x 22”.

My graphics devices are reported as:

$ clinfo | grep "Device Name"
  Device Name                                     Intel(R) Arc(TM) Graphics
  Device Name                                     NVIDIA RTX 3000 Ada Generation Laptop GPU
    Device Name                                   Intel(R) Arc(TM) Graphics
    Device Name                                   Intel(R) Arc(TM) Graphics
    Device Name                                   Intel(R) Arc(TM) Graphics

$ sudo intel-gpu-top

intel-gpu-top: Intel Meteorlake (Gen12) @ /dev/dri/card1

(As an aside, miniconda environment creation seems to have been updated so that the
new conda environment’s libstdc++.so.6 is no longer out of date – that having been the
source of the error I mentioned in my earlier post.)

Here are my steps:

Create a new conda environment with the latest (3.14) Python and activate it:

$ conda create -n 2_9_1_xpu python=3.14
...
$ conda activate 2_9_1_xpu

Install xpu version of pytorch:

(2_9_1_xpu) $ pip3 install torch torchvision --index-url https://download.pytorch.org/whl/xpu

Note, this is just the command given in the “install grid” on the main pytorch.org page, but with
“xpu” swapped in for “cu130” (or “cpu”). Also, I think that torchaudio had been deprecated, so
I didn’t try to install it.

Now I have the latest xpu version in my new conda environment and it passes a quick “smoke
test”:

(2_9_1_xpu) $ python
Python 3.14.2 | packaged by Anaconda, Inc. | (main, Dec 19 2025, 11:49:32) [GCC 14.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.__version__
'2.9.1+xpu'
>>> torch.xpu.is_available()
True
>>> t = torch.randn (5, 5, device = 'xpu')
>>> s = t @ t
>>> s
tensor([[ 1.5557, -1.1846,  0.7136,  0.5232,  0.7010],
        [ 1.6717,  3.4394, -0.3859, -3.5845,  0.2741],
        [ 0.3332,  0.2398,  5.6330, -3.4103, -1.0559],
        [-1.7403, -2.4121, -1.6182,  6.8576, -0.2425],
        [ 0.9398,  1.1792, -1.2883, -0.7954,  1.5411]], device='xpu:0')
>>>

So xpu pytorch works – at least on my machine. I don’t really understand the details, but
maybe what I did might give you some hints about how to get things working on your setup.

Good luck!

K. Frank

Hello Frank,

Thank you very much for your reply. I think my problem comes from the fact that the Intel tools are not yet integrated into Debian 13, so even

$ clinfo | grep "Device Name"

outputs… nothing. So my guess is that PyTorch itself is fine, but it can’t locate the Intel components required to access the GPU.
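One way to test that hypothesis from plain Python, independently of PyTorch, is to check whether the Level Zero loader can even be dlopen’ed (a minimal sketch; it only probes library visibility, not actual device enumeration):

```python
import ctypes
import ctypes.util

# Probe whether the Level Zero loader is visible to the dynamic linker.
name = ctypes.util.find_library("ze_loader") or "libze_loader.so.1"
try:
    ctypes.CDLL(name)
    print(f"loader found: {name}")
except OSError as exc:
    print(f"loader missing: {exc}")
```

If this prints “loader missing”, no PyTorch build will be able to see the GPU, whatever its wheel variant.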

Frank, what is the output of

$uname -ar

and

$sudo apt-cache policy libze1

on your ‘working’ machine?

Thank you !

Frédéric

EDIT: Problem solved! As guessed above, all the essential Intel Compute Runtime components were missing, without any warning or error that could have pointed to the problem.

So I followed the Intel Compute Runtime installation procedure, downloaded all the *.deb files from the 25.44.36015.8 release, and installed them. I added my user to the ‘render’ group, and voilà :slight_smile:
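The ‘render’ group step matters because the Compute Runtime reaches the GPU through the /dev/dri render nodes. A small sketch to check both prerequisites at once (`render_access_report` is a hypothetical helper, not part of any Intel tool):

```python
import getpass
import grp
import os

def render_access_report(user: str) -> dict:
    """Report the two prerequisites the Compute Runtime needs: a /dev/dri
    render node, and membership of `user` in the 'render' group."""
    dri = "/dev/dri"
    nodes = sorted(n for n in os.listdir(dri) if n.startswith("renderD")) if os.path.isdir(dri) else []
    try:
        # note: gr_mem does not list users whose *primary* group is 'render'
        in_render = user in grp.getgrnam("render").gr_mem
    except KeyError:  # no 'render' group on this system
        in_render = False
    return {"render_nodes": nodes, "in_render_group": in_render}

print(render_access_report(getpass.getuser()))
```

An empty `render_nodes` list means the kernel driver is not exposing the GPU; `in_render_group: False` means the user cannot open the node even if it exists (a re-login is needed after `usermod -aG render`).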

$ clinfo | grep "Device Name"
  Device Name                                     Intel(R) Arc(TM) A770 Graphics
    Device Name                                   Intel(R) Arc(TM) A770 Graphics
    Device Name                                   Intel(R) Arc(TM) A770 Graphics
    Device Name                                   Intel(R) Arc(TM) A770 Graphics
$ python
Python 3.14.2 (main, Dec 18 2025, 14:49:47) [GCC 14.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.xpu.is_available())
True

So for anyone with a clean install of Debian 13, the first step is to install the Intel Compute Runtime, and only then install PyTorch with the +xpu variant. As a reminder, here is my working configuration:

$uname -ar
Linux THOR 6.18.2-1-liquorix-amd64 #1 ZEN SMP PREEMPT_DYNAMIC liquorix 6.18-1.1~trixie (2025-12-24) x86_64 GNU/Linux
$pip install torch==2.9.1+xpu torchvision==0.24.1+xpu torchaudio==2.9.1+xpu intel-cmplr-lib-rt intel-cmplr-lib-ur intel-cmplr-lic-rt intel-sycl-rt pytorch-triton-xpu tcmlib umf intel-pti --index-url https://download.pytorch.org/whl/xpu

and to test it:

>>> print(f"PyTorch version: {torch.__version__}")
PyTorch version: 2.9.1+xpu
>>> print(f"XPU available: {torch.xpu.is_available()}")
XPU available: True
>>> print(f"Device count: {torch.xpu.device_count()}")
Device count: 1
>>> print(f"Device name: {torch.xpu.get_device_name(0)}")
Device name: Intel(R) Arc(TM) A770 Graphics
>>> x = torch.randn(1000, 1000, device='xpu')
>>> y = torch.randn(1000, 1000, device='xpu')
>>> result = torch.mm(x, y)  # Matrix multiplication on Intel GPU
>>> print(f"✅ GPU computation successful on: {result.device}")
✅ GPU computation successful on: xpu:0

Thank you so much for your help :slight_smile:
