PyTorch inference on Raspberry Pi?

I’m working on an object detection model to be deployed on a Raspberry Pi.
I usually stick to TFLite for this use case, but I really want to use PyTorch if possible.

I’ll be running inference on the Raspberry Pi. Currently, what is the best way to go about doing this?

Will torchscript help?
Most discussions in the forums point to an ARM .whl file to set up PyTorch on the Raspberry Pi. Is this the best method? (Installing pytorch on raspberry pi 3)

Any resource on this topic will be very much appreciated. Thanks.

Hi Gautham,

In my opinion it’s best to install via the .whl file; otherwise the build takes ages.

It might be a good idea to install PyTorch >= 1.5.0, because you already mentioned TorchScript. If that is not possible on a Raspberry Pi 3, you can also go for Flask with a WSGI server as the backend.
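To sketch the TorchScript workflow: you script (or trace) and save the model on your desktop, and the Pi then only needs `torch.jit.load`. A minimal sketch with a tiny hypothetical module standing in for your actual detector:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the real detection model
class TinyHead(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

model = TinyHead().eval()

# Script on the dev machine (torch.jit.trace with an example input also works)
scripted = torch.jit.script(model)
scripted.save("detector_scripted.pt")

# ...then on the Raspberry Pi, only this part is needed:
loaded = torch.jit.load("detector_scripted.pt")
with torch.no_grad():
    out = loaded(torch.randn(1, 3, 32, 32))
print(out.shape)  # torch.Size([1, 8, 32, 32])
```

The saved file carries both the weights and the serialized graph, so the Pi side doesn’t need the Python class definition at all.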

A great pretrained transformer is the DETR object detector, which you can already download from Facebook’s GitHub for PyTorch.
Others like Faster R-CNN might be a little too big for a Raspberry Pi 3.

Hope it helps & have fun!

Thanks @maltequast, I’ll proceed with the .whl file and PyTorch >= 1.5.0.

Regarding performance, is there any benchmarking data for PyTorch vs. TensorFlow Lite on the Raspberry Pi? Or, in your opinion, is the performance comparable, given that TFLite was purpose-built for this?

I haven’t used TFLite, but I know that since PyTorch 1.3 you can run PyTorch on mobile devices / ARM architectures on an experimental basis.
In PyTorch 1.5 you can use dynamic quantization for better latency.
Here is the tutorial.
Also, a huge advantage of the DETR object detector is the model size. Check it out here.
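To make the dynamic quantization point concrete, here is a minimal sketch on a toy model (not DETR): `torch.quantization.quantize_dynamic` replaces the `nn.Linear` layers with int8 versions, which mainly helps latency on CPU-bound devices like the Pi.

```python
import torch
import torch.nn as nn

# Toy classifier head standing in for a real model
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
).eval()

# Dynamically quantize the Linear layers: weights are stored as int8,
# activations are quantized on the fly at inference time
qmodel = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

with torch.no_grad():
    out = qmodel(torch.randn(1, 128))
print(out.shape)  # torch.Size([1, 10])
```

Note that dynamic quantization targets `Linear` (and RNN) layers; for conv-heavy detectors, static quantization is the route the tutorials describe.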

I put up PyTorch 1.6 / TorchVision 0.7 wheels for Raspberry Pi OS 64-bit in the hope that they’ll be useful.

Thanks @tom for sharing PyTorch’s wheel files.

Incidentally, I made new ones for a new course.
They are built from PyTorch master, but if your model uses convolutions and feels slow, the added (incomplete) PR helps a lot.

PyTorch provides official wheels for arm64/aarch64, so if you install the newly released 64-bit Raspberry Pi OS you can just run:

pip install torch torchvision torchaudio
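After that, a quick sanity check that the aarch64 wheel imports and can run a tensor op:

```python
# Quick check that the installed wheel actually works
import torch

print(torch.__version__)
x = torch.rand(2, 3)
y = x @ x.T  # any small op confirms the native libraries loaded correctly
print(y.shape)
```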

I wrote up a tutorial on how to do efficient real-time inference on a Raspberry Pi at Real Time Inference on Raspberry Pi 4 (30 fps!) — PyTorch Tutorials 1.10.1+cu102 documentation

Hey, thanks for the tutorial!

The last part, which prints the actual classes, is missing (it can’t be copy-pasted from the text). Although it’s trivial to import the linked file, it would be nice to have a self-contained example there!

This part:

top = list(enumerate(output[0].softmax(dim=0)))
top.sort(key=lambda x: x[1], reverse=True)
for idx, val in top[:10]:
    print(f"{val.item()*100:.2f}% {classes[idx]}")

will not know what the object classes are, obviously, since `classes` is never defined.
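For what it’s worth, the missing piece is just a list of label strings indexed by class id (the tutorial’s linked file is one such list). A self-contained sketch of the same top-k printing, with a dummy label list and a plain-Python softmax so nothing external is needed to illustrate it:

```python
import math

def topk_labels(scores, classes, k=3):
    # Softmax over the raw scores, then sort descending by probability
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    top = sorted(enumerate(probs), key=lambda x: x[1], reverse=True)[:k]
    return [(classes[i], p) for i, p in top]

# Dummy stand-ins; in practice `classes` comes from the label file
# linked in the tutorial, and `scores` from the model output
classes = ["tabby cat", "golden retriever", "goldfish"]
scores = [2.0, 0.5, -1.0]

for name, p in topk_labels(scores, classes):
    print(f"{p * 100:.2f}% {name}")
```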

Also, your example video shows a real-time update of the classes in the terminal, and I can’t really reproduce that: using os.system("clear") or print(chr(27) + "[2J") makes the screen blink a lot.
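For reference, what I’d expect to avoid the blink (a sketch on my side, not something from the tutorial) is to move the cursor home and overwrite each line instead of clearing the whole screen, so there is never a blank frame between updates:

```python
import sys

def redraw(lines):
    # \x1b[H moves the cursor home WITHOUT clearing the screen, so nothing
    # blanks out between frames; \x1b[K erases the tail of each line in
    # case the previous frame's text was longer
    out = "\x1b[H" + "".join(f"{line}\x1b[K\n" for line in lines)
    sys.stdout.write(out)
    sys.stdout.flush()

redraw(["78.56% tabby cat", "17.53% golden retriever"])
```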

Do you have the whole example in a working condition in some repo?

PS: I’m not sure if it’s because of the Arducam, but I don’t get more than 7 fps.