Incompatible CPU Errors for libtorch on a Linux Server

So I’ve built my PyTorch code on my local machine and hooked it up to an API so the model can process data over the web. I used the C++ API (through the Rust bindings) to access the model library. Everything was working fine locally; however, I ran into some problems when deploying.
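Roughly, the model-loading side looks like this. This is only a minimal sketch using the tch crate; the model path and input shape are placeholders, and my real code sits behind the web API:

```rust
use tch::{CModule, Device, Kind, Tensor};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load a TorchScript model exported from Python (path is a placeholder).
    let model = CModule::load("model.pt")?;

    // Dummy CPU-only input; the shape here is just an example.
    let input = Tensor::randn(&[1, 3, 224, 224], (Kind::Float, Device::Cpu));

    // Forward pass on the CPU; this is where the libtorch CPU library gets exercised.
    let output = model.forward_ts(&[input])?;
    println!("output shape: {:?}", output.size());
    Ok(())
}
```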

On my VM, though, it was giving some errors; these are shown in this GitHub issue for the Rust-to-PyTorch bindings. I’m using the CPU version only, no GPU code.

After a few days of testing on different systems and thinking it over, I’ve concluded that the prebuilt PyTorch binaries are not supported on ARM processors. My local machine has an AMD (x86_64) CPU and everything works well there.
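One thing that would have caught this earlier is checking the host architecture at startup, before libtorch is even touched. Just a small sketch (not a fix), assuming the binary is built natively on the host:

```rust
fn main() {
    // std::env::consts::ARCH is the architecture the binary was compiled for,
    // which matches the host for a native build: "x86_64" on my local AMD
    // machine, "aarch64" on the Neoverse VM.
    let arch = std::env::consts::ARCH;
    if arch != "x86_64" {
        eprintln!(
            "warning: running on {}; the prebuilt x86_64 libtorch binaries will not load here",
            arch
        );
    }
}
```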

Architecture:           aarch64
  CPU op-mode(s):       32-bit, 64-bit
  Byte Order:           Little Endian
CPU(s):                 3
Vendor ID:              ARM
  Model name:           Neoverse-N1
    Model:              1
    Thread(s) per core: 1
    Core(s) per socket: 3
    Socket(s):          1
    Stepping:           r3p1
    BogoMIPS:           50.00

Above is my current VM host machine’s CPU information, and as seen from the images in the GitHub issue, there is a problem with the torch_cpu.so library.
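To double-check the mismatch on the VM, one option is to inspect the shared library itself, for example by shelling out to the standard `file` utility. The library path below is just an assumption and will differ depending on where libtorch is installed:

```rust
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Hypothetical path; adjust to wherever your libtorch install puts the CPU library.
    let lib = "/opt/libtorch/lib/libtorch_cpu.so";

    // `file` prints the ELF target, e.g. "x86-64" vs "ARM aarch64",
    // which makes an architecture mismatch obvious.
    let output = Command::new("file").arg(lib).output()?;
    print!("{}", String::from_utf8_lossy(&output.stdout));
    Ok(())
}
```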

If anyone has any more information on this, please let me know. I’ve been thinking of using Docker to package my image for VMs now, but I still don’t think that will fix this CPU problem; I might have to move to a different VM.

We are building and publishing Docker containers for ARM-based server CPUs (tested on a Neoverse N1, too) here, so your build issues might be specific to Rust.

I was never running PyTorch in a Docker container, just natively on the system, but thanks, I will look into this.