from transformers import AutoModelWithLMHead
model = AutoModelWithLMHead.from_pretrained('fine-tuned-DialoGPT')
model.generate(...)
This is my code, and I'm running it on a 64-bit Raspberry Pi 4B. It crashes when model.generate(...) is executed and only prints Illegal instruction. It works fine on my computer, though.
What might be causing this issue?
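Since an Illegal instruction crash usually means the installed binary was compiled for a different CPU than the one it runs on, here is a small check I can run on both machines to compare what Python reports about the platform (a minimal diagnostic sketch using only the standard library; the expected values in the comments are my assumption about typical Pi OS and desktop setups):

```python
import platform

# On a 64-bit Raspberry Pi OS this should report 'aarch64';
# on a typical desktop it would be 'x86_64'. A PyTorch wheel built
# for the wrong one of these (or for a newer ARM revision) could
# plausibly trigger Illegal instruction.
print(platform.machine())

# Confirms whether the Python interpreter itself is 64-bit.
print(platform.architecture()[0])
```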
Update: I’ve tried using https://github.com/KumaTea/pytorch-aarch64, and with it I got a response of "eatures mathemat mathemat mathemat mathemat mathemat" (repeated roughly 1000 times). Generating that response took around 5 minutes.
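One thing I have not tried yet against the repeating output is the repetition_penalty option of generate(). As I understand it, that option rescales the logits of already-generated tokens roughly like the standalone sketch below (this is my own re-implementation for illustration, not the library's actual code; apply_repetition_penalty is a hypothetical helper name):

```python
def apply_repetition_penalty(logits, generated_ids, penalty=1.2):
    """Penalize tokens that already appear in the generated sequence.

    Positive logits are divided by the penalty, negative ones are
    multiplied by it, so repeated tokens always become less likely.
    """
    out = list(logits)
    for token_id in set(generated_ids):
        if out[token_id] > 0:
            out[token_id] /= penalty
        else:
            out[token_id] *= penalty
    return out

# Token ids 1 and 2 were already generated, so their logits shrink:
print(apply_repetition_penalty([2.0, -1.0, 0.5], [1, 2], penalty=2.0))
```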
I still don’t know how to get PyTorch working on a 64-bit Raspberry Pi 4B. Help is appreciated.