Does PyTorch support ARM processors (aarch64)?

Hello, everyone.

I want to run my PyTorch code on a board with an ARM processor (aarch64).
The OS on that board is Linux (Ubuntu 14.04).
I have tried many ways to build PyTorch on it, but all of them failed.

A simple installation using Anaconda (or Miniconda) failed.
It seems Anaconda does not support aarch64 at all
(the x86-64, armv7l, and ppc64le binaries do not work on my board, which has an aarch64 processor).

So I installed some dependencies using pip and managed to get to the point of building PyTorch from source; the messages from setup.py told me that the CMake configure and generate steps finished successfully.
But the build process stopped partway through with the following messages.



aarch64-linux-gnu-gcc: internal compiler error: Killed (program cc1plus)
aarch64-linux-gnu-gcc: internal compiler error: Killed (program cc1plus)
Please submit a full bug report,
with preprocessed source if appropriate.
Please submit a full bug report,
with preprocessed source if appropriate.
See <file:///usr/share/doc/gcc-5/README.Bugs> for instructions.
See <file:///usr/share/doc/gcc-5/README.Bugs> for instructions.
aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I/home/firefly/Downloads/pytorch-master -I/home/firefly/Downloads/pytorch-master/torch/csrc -I/home/firefly/Downloads/pytorch-master/torch/lib/tmp_install/include -I/home/firefly/Downloads/pytorch-master/torch/lib/tmp_install/include/TH -I/home/firefly/Downloads/pytorch-master/torch/lib/tmp_install/include/THPP -I/home/firefly/Downloads/pytorch-master/torch/lib/tmp_install/include/THNN -I/usr/lib/python2.7/dist-packages/numpy/core/include -I/usr/include/python2.7 -c torch/csrc/autograd/functions/batch_normalization.cpp -o build/temp.linux-aarch64-2.7/torch/csrc/autograd/functions/batch_normalization.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY
error: command ‘aarch64-linux-gnu-gcc’ failed with exit status 4
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++

So the question is: does PyTorch support the aarch64 processor?
If so, I would very much appreciate it if anyone could tell me where I am going wrong, or point me to an easier way to build it.

Thank you

P.S.: I also failed to install the MKL library on that board, so I installed OpenBLAS instead. I have heard that the performance gap between MKL and OpenBLAS is large, and runtime is very important in my project. Is there any suggestion or advice for installing MKL on an aarch64 processor? Thanks!


It does support the aarch64 processor. Someone from NVIDIA wrote up how to build it on their Jetson platform:
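
Separately, for what it's worth: the "internal compiler error: Killed (program cc1plus)" lines in the log above usually mean the compiler process was killed by the kernel's OOM killer (the board ran out of memory), not that GCC itself is broken. Capping the build parallelism, and optionally adding a temporary swap file, is often enough to get the source build through on small boards. A minimal sketch, assuming you are inside a checked-out pytorch source tree (MAX_JOBS is an environment variable the PyTorch build honors; the swap-file step is optional):

    # Optional: add a temporary swap file so the large C++ files can compile
    sudo fallocate -l 4G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile

    # Build with a single compile job to keep peak memory low
    MAX_JOBS=1 python setup.py install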


PyTorch has officially provided Linux AArch64 (64-bit Arm) wheels for a while now (at least since the 1.13 release). These work out of the box on Arm servers (like AWS Graviton and Ampere) and on other 64-bit Arm machines running Linux distros.
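
If it helps, on a 64-bit Arm machine running Linux the install is just the usual pip command, since the aarch64 wheels are published alongside the x86-64 ones. A minimal sketch (the one-line version check afterwards is only illustrative):

    pip install torch
    # Sanity check: should print the torch version and "aarch64"
    python -c "import torch, platform; print(torch.__version__, platform.machine())"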

These don't support CUDA though, am I correct? I need CUDA support, so I have been building from source, which has been quite painful.
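
For reference, a CUDA-enabled source build on an Arm board usually boils down to the steps below. This is a sketch under assumptions: USE_CUDA, TORCH_CUDA_ARCH_LIST, and MAX_JOBS are environment variables the PyTorch build scripts read, and the compute-capability value shown is only an example (7.2 matches Jetson Xavier; substitute your own GPU's).

    git clone --recursive https://github.com/pytorch/pytorch
    cd pytorch
    pip install -r requirements.txt

    # Force a CUDA build, target only the local GPU architecture, and cap parallelism
    export USE_CUDA=1
    export TORCH_CUDA_ARCH_LIST="7.2"   # example: Jetson Xavier; use your GPU's compute capability
    export MAX_JOBS=2                   # lower this if the compiler gets OOM-killed
    python setup.py install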