NVIDIA DGX Spark support

Does PyTorch support the newly released NVIDIA DGX Spark?

Yes, our latest stable and nightly binaries support DGX Spark. You can install them via pip3 install torch torchvision --index-url ``https://download.pytorch.org/whl/cu130``.

Can I install with uv?

I believe so, but I haven't tried it out. Let me know if it works.
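For reference, a minimal sketch of what that would look like with uv's pip-compatible interface (untested on DGX Spark; same cu130 index as above):

```bash
# uv's pip-compatible interface, pointed at the same cu130 wheel index
# as the pip command above. Untested on DGX Spark - treat as a sketch.
uv pip install torch torchvision --index-url https://download.pytorch.org/whl/cu130
```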

Hi, I tried but have issues:

```
(spark_test) asi@spark-9157:~$ pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu130.
Looking in indexes: https://download.pytorch.org/whl/cu130.
ERROR: Could not find a version that satisfies the requirement torch (from versions: none)
ERROR: No matching distribution found for torch
```

Please kindly help.

Which Python version are you using?

I installed conda following this: Python on NVIDIA DGX Spark: First Impressions | Anaconda

```
$ python
Python 3.13.9 | packaged by Anaconda, Inc. | (main, Oct 21 2025, 19:17:31) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.cpu_count()
20
```

I tried Python versions from 3.12 down to 3.7; all gave similar messages (3.7 printed one extra warning). Should I not use conda?

```
(spark_py37) asi@spark-9157:~$ !136
pip3 install torch torchvision --index-url ``https://download.pytorch.org/whl/cu130``.
Looking in indexes: https://download.pytorch.org/whl/cu130.
ERROR: Could not find a version that satisfies the requirement torch (from versions: none)
ERROR: No matching distribution found for torch
WARNING: There was an error checking the latest version of pip.
```

Unfortunately, I cannot reproduce any issues, and the install instructions just work on my system:

```
python --version
Python 3.12.3
...
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu130
Looking in indexes: https://download.pytorch.org/whl/cu130
Collecting torch
  Downloading https://download.pytorch.org/whl/cu130/torch-2.9.0%2Bcu130-cp312-cp312-manylinux_2_28_aarch64.whl.metadata (30 kB)
Collecting torchvision
  Downloading https://download.pytorch.org/whl/cu130/torchvision-0.24.0-cp312-cp312-manylinux_2_28_aarch64.whl.metadata (5.9 kB)
...
Collecting nvidia-cuda-nvrtc==13.0.48 (from torch)
  Downloading https://download.pytorch.org/whl/cu130/nvidia_cuda_nvrtc-13.0.48-py3-none-manylinux2014_aarch64.manylinux_2_17_aarch64.whl.metadata (1.7 kB)
Collecting nvidia-cuda-runtime==13.0.48 (from torch)
  Downloading https://download.pytorch.org/whl/cu130/nvidia_cuda_runtime-13.0.48-py3-none-manylinux2014_aarch64.manylinux_2_17_aarch64.whl.metadata (1.7 kB)
Collecting nvidia-cuda-cupti==13.0.48 (from torch)
  Downloading https://download.pytorch.org/whl/cu130/nvidia_cuda_cupti-13.0.48-py3-none-manylinux_2_25_aarch64.whl.metadata (1.7 kB)
Collecting nvidia-cudnn-cu13==9.13.0.50 (from torch)
...
python -c "import torch; print(torch.__version__); print(torch.cuda.get_device_properties(0)); print(torch.randn(1).cuda())"
2.9.0+cu130
_CudaDeviceProperties(name='NVIDIA GB10', major=12, minor=1, total_memory=122484MB, multi_processor_count=48, ...)
tensor([-1.1308], device='cuda:0')
```

I don't know what's causing the install issues on your system, but you could try to download the wheel manually, e.g. from here for Python 3.12, and install it afterwards. Maybe a better error message will be raised.
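For example, a minimal sketch of the manual route, reusing the cp312 wheel URL from the install log above (whether that file name is still current is an assumption):

```bash
# Fetch the aarch64 cp312 wheel directly from the cu130 index
# (URL copied from the pip log earlier in this thread), then install it.
wget -O torch-2.9.0+cu130-cp312-cp312-manylinux_2_28_aarch64.whl \
  https://download.pytorch.org/whl/cu130/torch-2.9.0%2Bcu130-cp312-cp312-manylinux_2_28_aarch64.whl
pip install ./torch-2.9.0+cu130-cp312-cp312-manylinux_2_28_aarch64.whl
```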

Hi all

I have compiled the PyTorch 2.9.0 nightly with sm_121 and CUDA 13.0 support on my DGX, and it went almost smoothly (I also had to recompile torchvision, triton, etc.; see the build sketch at the end of this post).
What I am trying to do: run FramePack inference on the GB10 (DGX Spark FE) to get a sense of its inference performance (I know there are other ways to test).
What I stumbled across: the PyTorch sources include flash_attn, which references sm80.cu files that appear to end up in a built .so (I guess a lazy build dependency). That then causes a hard error on the Spark:

```
FATAL: kernel fmha_cutlassF_f16_aligned_64x128_rf_sm80 is for sm80-sm100, but was built for sm121
```

Is this a known issue? Or, even better, do you know of a solution?

Thanks in advance

Andreas

P.S.: I almost forgot: I'm using Python 3.12 in a Docker container based on NGC nvcr.io/nvidia/pytorch:25.09-py3.
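For reference, arch selection for such a source build is typically driven by TORCH_CUDA_ARCH_LIST (a standard PyTorch build variable; the exact values and build command below are illustrative assumptions, not my actual recipe):

```bash
# Target Blackwell parts when building PyTorch from source;
# 12.1 corresponds to GB10 (sm_121). Values are illustrative assumptions.
export TORCH_CUDA_ARCH_LIST="12.0;12.1"
python setup.py develop
```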

Claude asked me to remove the backticks (maybe an artifact of copy & paste?). Then it works.

“1. Remove the backticks from the URL

Your command has backticks (`) around the URL, which shouldn’t be there. The command should be:”

```bash
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu130
```

Thank you for your help.

Ah OK, I assumed these backticks were just copy/paste issues in your post.
Where did you find this broken command?

If this is the wrong thread, feel free to move my question

I'm unsure which library you are compiling from source and why, as our PyTorch binaries already support DGX Spark. Assuming it's a custom lib or a submodule causing issues, I would recommend creating a new topic, as I assume this one will continue discussing installation and setup issues for our binaries.

PyTorch in its latest rev (even the nightly) reports supporting SM 8.0 - 12.0, and GB10 is 12.1. The source does support 12.1 (the PyTorch part) and compilation worked, but as mentioned the cutlass and flash_attn submodules throw exceptions at runtime.
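For reference, a quick way to compare what the installed binary was built for against the local GPU (standard PyTorch APIs; the GB10 value is taken from the device properties shown earlier in this thread):

```python
import torch

# CUDA architectures the installed binary was compiled for.
print(torch.cuda.get_arch_list())
# Compute capability of the local GPU; (12, 1) on GB10
# per the device properties printed earlier in this thread.
print(torch.cuda.get_device_capability(0))
```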

I created a new topic:

Ignore the warning as it's wrong: sm_121 is SASS binary compatible with sm_120. We already have a PR open for it, but it seems to have missed the branch cut for 2.9.0.

Ah, I see. This still prevents GB10 from getting native access, right? And is it really only a warning? The profiler shows 100% CPU usage instead of GPU during inference, but I'm going to close the thread here and try your proposal of the cuDNN backend in the other thread.
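For anyone landing here later, a minimal sketch of forcing the cuDNN attention backend (a standard PyTorch API; whether it sidesteps the sm80 flash kernels on GB10 is exactly what the linked thread is testing):

```python
import torch
import torch.nn.functional as F
from torch.nn.attention import sdpa_kernel, SDPBackend

# Dummy q/k/v tensors in half precision on the GPU.
q = k = v = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.float16)

# Restrict scaled_dot_product_attention to the cuDNN backend,
# avoiding the flash-attention kernels discussed above.
with sdpa_kernel(SDPBackend.CUDNN_ATTENTION):
    out = F.scaled_dot_product_attention(q, k, v)
print(out.shape)
```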