I benchmarked two different ResNet50 models: the Apple Core ML model available on the Apple website, and a pretrained Torchvision ResNet50 model that I converted via ONNX (opset 9) and coremltools (targeting iOS 13).
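For reference, this is roughly the conversion pipeline I used. It's a sketch from memory: the `onnx_coreml.convert` arguments shown here (in particular `minimum_ios_deployment_target`) may differ slightly depending on which version of the converter you have installed.

```python
# Sketch of the PyTorch -> ONNX -> Core ML conversion I used.
# Requires torch, torchvision, and the onnx-coreml converter package.
import torch
import torchvision
from onnx_coreml import convert  # ONNX -> Core ML converter (pre-unified coremltools API)

# Export the pretrained Torchvision ResNet50 to ONNX with opset 9
model = torchvision.models.resnet50(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 224, 224)  # standard ImageNet input shape
torch.onnx.export(model, dummy_input, "resnet50.onnx", opset_version=9)

# Convert the ONNX graph to Core ML, targeting iOS 13
mlmodel = convert(
    model="resnet50.onnx",
    minimum_ios_deployment_target="13",
)
mlmodel.save("Resnet50.mlmodel")
```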
I tested both models on a brand new iPhone XR.
- Apple ResNet50: CPU inference 100 ms, GPU inference 60 ms, ANE inference 15 ms
- Torchvision ResNet50: CPU inference 100 ms, GPU inference 60 ms, ANE inference 60 ms
As you can see, ANE inference for the Apple version is 4x faster (15 ms vs 60 ms).
Memory usage:
- Apple ResNet50: 46 MB CPU, 92 MB GPU, 35 MB ANE
- Torchvision ResNet50: 48 MB CPU, 91 MB GPU, 91 MB ANE
On the ANE, the Apple version uses roughly a third of the memory (35 MB vs 91 MB); CPU and GPU usage are comparable between the two models.
Since the Torchvision model's ANE timing and memory exactly match its GPU numbers, I conclude it falls back to the GPU and doesn't actually run on the ANE.
Does anyone know how the models in the Apple repo were compiled so that they run on the ANE? According to the model description, Apple's models were originally written in Keras, but as far as I know the Keras ResNet50 architecture doesn't differ from the PyTorch Torchvision one, so I am not sure where the difference comes from.