Is there any way, plan or timeline to support pytorch-cuda=12 for GH200?
Following Start Locally | PyTorch, we just get a PackagesNotFoundError,
and pytorch-cuda=11 installs CUDA 11.8 plus pytorch_cpu from conda-forge, which is not really usable.
We can use the NGC containers or install from source, but conda/mamba is still the go-to solution for many users…
The binaries are already being built in CI runs, e.g. here (scroll down, download the wheel for the corresponding Python version, and pip install it locally), and the CI job integration allowing a pip install from the nightly index is WIP, e.g. here (latest update from 5 mins ago, so we are actively working on it).
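As an aside, before pip-installing a manually downloaded wheel it can save a failed install to sanity-check that its filename tags match your interpreter. This is a minimal sketch based on the PEP 427 wheel naming convention (the example filename is hypothetical, not an actual CI artifact name):

```python
import platform
import sys

def wheel_tags(wheel_filename: str):
    """Split a PEP 427 wheel filename into its (python, abi, platform) tags."""
    stem = wheel_filename.removesuffix(".whl")
    parts = stem.split("-")
    # name-version[-build]-python-abi-platform: the last three parts are the tags
    return tuple(parts[-3:])

def matches_current_interpreter(wheel_filename: str) -> bool:
    """Rough check: CPython version tag and machine architecture both line up."""
    py_tag, _abi, plat_tag = wheel_tags(wheel_filename)
    want_py = f"cp{sys.version_info.major}{sys.version_info.minor}"
    return py_tag == want_py and platform.machine() in plat_tag

# Hypothetical nightly aarch64 wheel filename for illustration:
name = "torch-2.1.0.dev20230823-cp311-cp311-linux_aarch64.whl"
print(wheel_tags(name))  # ('cp311', 'cp311', 'linux_aarch64')
```

If the tags don't match, pip will refuse the wheel with "not a supported wheel on this platform", which is easy to misread as the wheel itself being broken.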
Thanks a lot for the info and quick reply, Piotr!
It works for me and it is great to know that it should soon be available in the official channels. Awesome work!
P.S.: it seems the wheel has grown significantly in size; is that just an artifact of the dev build?
The current wheel is shipping as a “large” wheel, which packages every dependency (including cuDNN, NCCL, cuBLAS, etc.) into the wheel’s lib folder (we’ve used the same workflow for nightly builds before). One of our next steps is to use the CUDA PyPI dependencies (as is done for the x86 Linux wheels) to build a “small” wheel again. In the end the same data will be downloaded, but instead of one gigantic wheel, users would download several smaller wheels.
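For the curious: since a wheel is just a zip archive, you can see what a “large” wheel bundles by listing the shared libraries inside it. A small sketch (the wheel path is a placeholder; the exact layout of the lib folder can differ between builds):

```python
import zipfile
from collections import defaultdict

def bundled_libs(wheel_path: str) -> dict:
    """Sum the sizes (in bytes) of shared libraries inside a wheel,
    grouped by top-level directory, without extracting it."""
    sizes = defaultdict(int)
    with zipfile.ZipFile(wheel_path) as wf:
        for info in wf.infolist():
            # catch both 'libfoo.so' and versioned names like 'libfoo.so.8'
            if info.filename.endswith(".so") or ".so." in info.filename:
                top = info.filename.split("/", 1)[0]
                sizes[top] += info.file_size
    return dict(sizes)

# e.g. bundled_libs("torch-2.1.0.dev20230823-cp311-cp311-linux_aarch64.whl")
# would show most of the size sitting under torch/lib (cuDNN, NCCL, cuBLAS, ...)
```

This makes the size difference concrete: in a “small” wheel those libraries would arrive as separate CUDA PyPI packages rather than inside torch/lib.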
I know this topic is closed, but I didn’t want to create a new one. How exactly do you install PyTorch 2.3.1 with CUDA 12.1 on an aarch64 system? I also have a new system with a Grace Hopper GPU. It appears that it’s still not available on conda: only CUDA 11.8 is offered, and even then it downloads the CPU version of PyTorch. Pip install does the same. I’m trying to install for Python 3.11. Any help would be greatly appreciated. Thanks.
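One quick way to confirm the “it installed the CPU version” diagnosis is the local build tag on `torch.__version__` (e.g. `2.3.1+cpu` vs. `2.3.1+cu121`). A hedged sketch of that check, written as a plain string helper so it works even when classifying a version string copied from elsewhere:

```python
def cuda_flavor(torch_version: str) -> str:
    """Classify a torch version string ('2.3.1+cpu', '2.3.1+cu121', ...)
    by its local build tag."""
    local = torch_version.partition("+")[2]
    if not local:
        return "no local tag (often a CUDA wheel installed from PyPI)"
    if local == "cpu":
        return "cpu-only"
    if local.startswith("cu"):
        # 'cu121' -> CUDA 12.1, 'cu118' -> CUDA 11.8
        return f"CUDA {local[2:-1]}.{local[-1]}"
    return local

print(cuda_flavor("2.3.1+cpu"))    # cpu-only
print(cuda_flavor("2.3.1+cu121"))  # CUDA 12.1
```

In practice you would call `cuda_flavor(torch.__version__)` after importing torch; `torch.cuda.is_available()` remains the definitive runtime check.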