PyTorch inference on Raspberry Pi?

I’m working on an object detection model to be deployed on a Raspberry Pi.
I usually stick to TFLite for this use case, but I really want to use PyTorch if possible.

I’ll be running inference on the Raspberry Pi. Currently, what is the best way to go about doing this?

Will TorchScript help?
Most discussions in the forums point to an ARM .whl file to set up PyTorch on the Raspberry Pi. Is this the best method - Installing pytorch on raspberry pi 3?

Any resource on this topic will be very much appreciated. Thanks.

Hi Gautham,

In my opinion it’s best to install via the .whl file; otherwise building PyTorch from source takes ages.

It might be a good idea to install PyTorch >= 1.5.0, since you already mentioned TorchScript. If that’s not possible on the Raspberry Pi 3, you can also go for Flask and a WSGI server as a backend.
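For reference, here is a minimal sketch of the TorchScript workflow: trace a model once on your development machine, save the archive, and load it on the Pi with `torch.jit.load()`. `TinyNet` is a hypothetical stand-in model for illustration, not your actual detector.

```python
import torch
import torch.nn as nn

# A tiny stand-in model; in practice this would be your detection network.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x).relu()

model = TinyNet().eval()

# Trace the model with a dummy input and save the TorchScript archive.
example = torch.randn(1, 3, 64, 64)
scripted = torch.jit.trace(model, example)
scripted.save("tinynet_scripted.pt")

# On the Pi, the saved archive is loaded without needing the Python class.
restored = torch.jit.load("tinynet_scripted.pt")
out = restored(example)
print(out.shape)  # torch.Size([1, 8, 64, 64])
```

Tracing records one forward pass, so models with data-dependent control flow would need `torch.jit.script` instead.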

A great pretrained transformer is the DETR object detector, and you can already download it from Facebook’s GitHub for PyTorch.
Others like Faster R-CNN might be a little too heavy for a Raspberry Pi 3.
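For what it’s worth, the pretrained DETR models in the facebookresearch/detr repo are exposed through `torch.hub`; a sketch (the `detr_resnet50` entry point is the one listed in that repo, and the first call downloads the weights over the network):

```python
import torch

def load_detr(pretrained: bool = True):
    # Pull the ResNet-50 DETR variant from the facebookresearch/detr repo
    # via torch.hub; pretrained weights are downloaded on the first call.
    model = torch.hub.load("facebookresearch/detr", "detr_resnet50",
                           pretrained=pretrained)
    return model.eval()

# Usage (requires network access on the first run):
# model = load_detr()
# outputs = model(torch.randn(1, 3, 800, 800))
# boxes, logits = outputs["pred_boxes"], outputs["pred_logits"]
```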

Hope it helps & have fun!


Thanks @maltequast, I’ll proceed with the .whl file and PyTorch >= 1.5.0.

Regarding performance, is there any benchmarking data for PyTorch vs. TensorFlow Lite on the Raspberry Pi? Or, in your opinion, is the performance comparable, given that TFLite was purpose-built for this?

I haven’t used TFLite, but I know that since PyTorch 1.3 you can use PyTorch on mobile devices / ARM architectures on an experimental basis.
In PyTorch 1.5 you can use dynamic quantization for better latency.
Here is the tutorial.
Also, a huge advantage of the DETR object detector is its model size. Check it out here.
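Dynamic quantization is a one-liner in PyTorch 1.5+: it converts the weights of supported layer types (notably `nn.Linear` and `nn.LSTM`) to int8 at load time, which shrinks the model and can speed up CPU inference. A minimal sketch with a hypothetical toy model:

```python
import torch
import torch.nn as nn

# A small model with Linear layers -- the layer type that
# torch.quantization.quantize_dynamic targets by default.
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
).eval()

# Replace the Linear layers with dynamically quantized (int8-weight)
# equivalents; activations are quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
out = quantized(x)
print(out.shape)  # torch.Size([1, 10])
```

Note that convolution layers are not covered by dynamic quantization, so for a mostly-convolutional detector the gains come from any fully connected heads.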


I put up PyTorch 1.6 / TorchVision 0.7 wheels for Raspberry Pi OS 64-bit in the hope that they’ll be useful.
