Installing PyTorch on Raspberry Pi 3

Are there any easy/less painful ways of installing PyTorch on a Raspberry Pi than running python setup.py build from source?

I’m not too sure; I’ve tried using pip3 and keep getting the ‘not supported on this platform’ error. I’m running a 64-bit OS, and the Raspberry Pi 3 has 64-bit hardware, so I am as baffled as you.
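In case it helps with debugging, a quick way to confirm what pip actually sees (plain Python and shell, nothing PyTorch-specific): as far as I know there were no official ARM wheels on PyPI at the time, so pip reports ‘not supported’ regardless, but it still matters whether the interpreter is aarch64 or armv7l, because that decides which unofficial wheels can be installed. A 64-bit kernel with a 32-bit userland still reports armv7l here.

# What architecture does the Python that pip3 uses report?
python3 -c "import platform, sys; print(platform.machine(), platform.architecture(), sys.version)"
# Kernel architecture, for comparison (aarch64 = 64-bit, armv7l = 32-bit):
uname -m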

Did you manage to get it working?

You could try to build from source, as I’m not sure whether anyone has published ARM binaries of PyTorch.
Even though the RPi uses a 64-bit CPU, its architecture differs from that of the official binaries (x86).

I’ve been trying to build from source for the past month, and it’s been a very painful experience, mainly because Raspberry Pi doesn’t release a 64-bit OS. I have been using 64-bit Ubuntu Server for ARM, but after a month of constant trial and error it’s becoming a project I’m tiring of, and I was hoping someone might have had more success.
Still, I have reduced the number of errors I’m getting by almost half, so I guess I’m doing something right :slight_smile:

Oh that doesn’t sound like a good experience. :confused:
What kind of errors do you get?

Good question. I am currently re-installing my backup image of the last stable state of the SD card, and then I shall compile again and record the output for you.

Okay, so far I have this, right at the beginning:

[ 0%] Building C object confu-deps/pthreadpool/CMakeFiles/pthreadpool.dir/src/threadpool-pthreads.c.o
/home/ubuntu/pytorch/third_party/QNNPACK/deps/clog/src/clog.c: In function ‘clog_vlog_fatal’:
/home/ubuntu/pytorch/third_party/QNNPACK/deps/clog/src/clog.c:120:4: warning: ignoring return value of ‘write’, declared with attribute warn_unused_result [-Wunused-result]
write(STDERR_FILENO, out_buffer, prefix_chars + format_chars + CLOG_SUFFIX_LENGTH);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/ubuntu/pytorch/third_party/QNNPACK/deps/clog/src/clog.c: In function ‘clog_vlog_error’:
/home/ubuntu/pytorch/third_party/QNNPACK/deps/clog/src/clog.c:196:4: warning: ignoring return value of ‘write’, declared with attribute warn_unused_result [-Wunused-result]
write(STDERR_FILENO, out_buffer, prefix_chars + format_chars + CLOG_SUFFIX_LENGTH);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/ubuntu/pytorch/third_party/QNNPACK/deps/clog/src/clog.c: In function ‘clog_vlog_warning’:
/home/ubuntu/pytorch/third_party/QNNPACK/deps/clog/src/clog.c:272:4: warning: ignoring return value of ‘write’, declared with attribute warn_unused_result [-Wunused-result]
write(STDERR_FILENO, out_buffer, prefix_chars + format_chars + CLOG_SUFFIX_LENGTH);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/ubuntu/pytorch/third_party/QNNPACK/deps/clog/src/clog.c: In function ‘clog_vlog_info’:
/home/ubuntu/pytorch/third_party/QNNPACK/deps/clog/src/clog.c:348:4: warning: ignoring return value of ‘write’, declared with attribute warn_unused_result [-Wunused-result]
write(STDOUT_FILENO, out_buffer, prefix_chars + format_chars + CLOG_SUFFIX_LENGTH);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/ubuntu/pytorch/third_party/QNNPACK/deps/clog/src/clog.c: In function ‘clog_vlog_debug’:
/home/ubuntu/pytorch/third_party/QNNPACK/deps/clog/src/clog.c:424:4: warning: ignoring return value of ‘write’, declared with attribute warn_unused_result [-Wunused-result]
write(STDOUT_FILENO, out_buffer, prefix_chars + format_chars + CLOG_SUFFIX_LENGTH);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This has confused me a little, as I had set the environment variables as follows:

export NO_CUDA=1
export NO_DISTRIBUTED=0
export NO_MKLDNN=1
export NO_NNPACK=1
export NO_QNNPACK=1
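For what it’s worth, a hedged sketch rather than something verified against this exact commit: newer versions of setup.py key off USE_*-style variables and, if I remember correctly, only translate the legacy NO_* names for backwards compatibility; also, the clog/pthreadpool objects in the log above come from shared third-party dependencies, so they can still be compiled even with QNNPACK disabled. Spelling the switches out explicitly at least removes one variable:

# USE_* equivalents of the flags above (USE_DISTRIBUTED=1 matches NO_DISTRIBUTED=0)
export USE_CUDA=0
export USE_DISTRIBUTED=1
export USE_MKLDNN=0
export USE_NNPACK=0
export USE_QNNPACK=0
# Optional: cap parallel jobs so the Pi doesn't run out of memory during compilation
export MAX_JOBS=2
python3 setup.py build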

Okay, it’s 48 hours into compiling and at 92%, but I’m getting many deprecation warnings, including about the dangerous use of tmpnam and that I should be using mkstemp instead. These are things I have no control over, correct?

This is the error I received during the night:

[ 92%] Building CXX object test_api/CMakeFiles/test_api.dir/dataloader.cpp.o
caffe2/CMakeFiles/op_registration_test.dir/build.make:62: recipe for target ‘caffe2/CMakeFiles/op_registration_test.dir/__/aten/src/ATen/core/op_registration/op_registration_test.cpp.o’ failed
CMakeFiles/Makefile2:3922: recipe for target ‘caffe2/CMakeFiles/op_registration_test.dir/all’ failed

[ 93%] Built target test_api
Makefile:140: recipe for target ‘all’ failed
Building wheel torch-1.3.0a0+c2549cb
-- Building version 1.3.0a0+c2549cb
cmake -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/home/ubuntu/pytorch/torch -DCMAKE_PREFIX_PATH=/usr/lib/python3/dist-packages -DNUMPY_INCLUDE_DIR=/home/ubuntu/.local/lib/python3.6/site-packages/numpy/core/include -DPYTHON_EXECUTABLE=/usr/bin/python3 -DPYTHON_INCLUDE_DIR=/usr/include/python3.6m -DPYTHON_LIBRARY=/usr/lib/libpython3.6m.so.1.0 -DTORCH_BUILD_VERSION=1.3.0a0+c2549cb -DUSE_CUDA=False -DUSE_DISTRIBUTED=True -DUSE_NUMPY=True /home/ubuntu/pytorch
cmake --build . --target install --config Release -- -j 4
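Since the targets that failed (op_registration_test, test_api) are C++ test binaries rather than the library itself, one possible workaround, untested on this exact commit, is to skip building the tests altogether; BUILD_TEST is read by setup.py and forwarded to CMake as -DBUILD_TEST:

# Skip the C++ test binaries and rebuild
export BUILD_TEST=0
python3 setup.py build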


Hi @Howlshigh, have you managed to get it done?

No, unfortunately not; the deprecation warnings became too much. I will try it again later.

I published ARM64 binaries of PyTorch compiled on the Raspberry Pi (I actually compiled 1.4, too, I just haven’t uploaded it yet). You would need a 64-bit distribution (e.g. Debian for the Raspberry Pi 3) or a 64-bit kernel from the Raspberry Pi Foundation and an arm64 chroot.
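A quick way to check which of those situations you are in, using standard Debian/Ubuntu tooling:

uname -m                    # aarch64 = 64-bit kernel, armv7l = 32-bit kernel
dpkg --print-architecture   # arm64 or armhf userland, which determines the wheels that will install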

There are also ARM32 binaries from @LeviViana. At least in September, some things like JIT tracing didn’t work when I built on ARM32; Levi would know whether he fixed it or whether it’s still open.

Best regards

Thomas


I have been using Ubuntu Server 18.10 ARM64 and was constantly warned about deprecations during the build. I find it a testament to the lack of foresight at Raspberry Pi that they refuse to release ARM64 software because of the earlier 32-bit Pi models still in use.
That said, it does provide a challenge, which is half the fun but double the frustration.

Indeed, I did some quick fixes last year to get torch 1.3 compiled on the RPi. However, not all of the unit tests passed (which is why I didn’t even try a PR or anything), but the main, commonly used torch functions work just as they should. I could do some digging and find out what I did last year, but I have no time for this right now :frowning:; moreover, I think it doesn’t matter that much.

I’ll probably do the same for torch 1.4 in the coming weeks or months.

I just compiled torch 1.4; you’ll find the wheels here:

https://wintics-opensource.s3.eu-west-3.amazonaws.com/torch-1.4.0a0%2B7963631-cp37-cp37m-linux_armv7l.whl
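(For anyone finding this later: the cp37 / linux_armv7l tags in the filename mean CPython 3.7 on a 32-bit ARM userland, so with a matching Python the install should just be the following.)

pip3 install torch-1.4.0a0+7963631-cp37-cp37m-linux_armv7l.whl
python3 -c "import torch; print(torch.__version__)"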

Have fun!


Thanks, I will give them a shot later :slight_smile:

Could you provide build instructions? I get some nasty compilation errors (https://github.com/pytorch/pytorch/issues/35049)

Did the wheels by LeviViana work?

LeviViana, how did you compile?
What options did you set?

I hope this post will help you out!

Just in case it’s helpful, I’ve also compiled a version of torch 1.4 and torchvision 0.5 for the Raspberry Pi (tested with an RPi 3B). These wheels have the NEON optimisations enabled and also support the new torch JIT functionality.

You can find the wheels here:

and the docker images I used to compile these wheels here:
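As a quick smoke test after installing, including the JIT path mentioned above, a minimal sketch like this should be enough (it just traces a tiny linear module and runs the traced version):

python3 - <<'EOF'
import torch

model = torch.nn.Linear(4, 2)              # tiny module, just to exercise the wheel
example = torch.randn(1, 4)
traced = torch.jit.trace(model, example)   # exercises the JIT/tracing support
print(torch.__version__)
print(traced(example))
EOF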
