"Illegal instruction" using AutoModelWithLMHead on 64bit Raspberry Pi 4B

from transformers import AutoModelWithLMHead, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('fine-tuned-DialoGPT')
model = AutoModelWithLMHead.from_pretrained('fine-tuned-DialoGPT')
output = model.generate(tokenizer.encode('Hello', return_tensors='pt'))

This is my code, running on a 64-bit Raspberry Pi 4B. It crashes as soon as model.generate(...) executes, printing only Illegal instruction. The same code works fine on my desktop computer.
What may be causing this issue?
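An "Illegal instruction" crash usually means a native binary (here, most likely the PyTorch wheel) was compiled with CPU instructions your Pi's ARM core doesn't support. One way to narrow down where the crash happens is to run the suspect import in a child process and inspect the exit code, so the parent survives to report it. A minimal stdlib-only sketch (the `torch` probe assumes PyTorch is installed; swap in any module):

```python
import platform
import subprocess
import sys

def probe_import(module: str) -> int:
    """Run `import <module>` in a child process and return its exit code.
    On Linux, a process killed by SIGILL (illegal instruction) exits
    with a negative code (-4), so a crash here won't take down the parent."""
    result = subprocess.run([sys.executable, "-c", f"import {module}"])
    return result.returncode

print("machine:", platform.machine())  # expect 'aarch64' on 64-bit Pi OS
print("torch import exit code:", probe_import("torch"))
```

If the import itself already exits with -4, the wheel is built for a different CPU; if the import is clean and only `generate` crashes, the problem is in a specific kernel that the model happens to exercise.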

Update: I’ve tried the wheels from https://github.com/KumaTea/pytorch-aarch64 and got a response like eatures mathemat mathemat mathemat mathemat mathemat (repeat this 1000 times)........ It took around five minutes to generate that response.
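Runaway repetition like `mathemat mathemat mathemat ...` is the kind of degenerate output that `generate`'s `no_repeat_ngram_size` parameter is designed to suppress (alongside `repetition_penalty`). The core idea of that constraint, sketched in plain Python as an illustration (the function and example tokens are hypothetical, not transformers internals):

```python
def would_repeat_ngram(tokens, candidate, n=3):
    """Return True if appending `candidate` to `tokens` would create an
    n-gram already present in the sequence -- the banning rule behind
    transformers' no_repeat_ngram_size generation constraint."""
    if len(tokens) < n - 1:
        return False
    new_ngram = tuple(tokens[-(n - 1):] + [candidate])
    # Collect every n-gram already seen in the sequence so far.
    seen = {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}
    return new_ngram in seen

history = ["eatures", "mathemat", "mathemat"]
print(would_repeat_ngram(history, "mathemat", n=2))  # → True: 2-gram repeats
```

That said, if the same checkpoint decodes normally on your desktop, the repetition on the Pi points at a broken or miscompiled wheel rather than a decoding-parameter problem.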

I still don’t know how to get PyTorch working on a 64-bit RasPi 4B. Any help is appreciated.

Not an expert, but maybe these instructions will help you get started: https://github.com/pytorch/tutorials/pull/1821


Turns out it was an issue with the 64-bit Raspberry Pi OS beta. I updated to the official release and it miraculously worked.
