Slow prediction time for PyTorch built from source on a Raspberry Pi 3

Hello,

I just built PyTorch v1.0.1 from source using the following commands (I did it twice, once on a Raspberry Pi 3+ and a second time with QEMU emulating an armv7, yielding the same results):

export NO_CUDA=1
export NO_DISTRIBUTED=1
export NO_MKLDNN=1 
export CFLAGS="-march=armv7-a -mtune=cortex-a8 -mfpu=neon -mfloat-abi=hard  -O3"

python3 setup.py bdist_wheel

I did get a valid wheel, but after running a test script that compares prediction performance against Keras + TensorFlow, I found that the PyTorch prediction was roughly twice as slow as the Keras + TF one.

What is weird is that when I run the same test (multiple times) on an x86 processor (using the existing PyTorch wheels), the PyTorch predictions come out slightly faster than Keras + TF.

Is there anything I need to do to make the built PyTorch run predictions faster?
I did tweak those compile flags and runtime flags (like NUM_CPUS=4, OMP_NUM_THREADS=4 and MKL_NUM_THREADS=4), but the best result I got was still the "twice as slow" prediction time.
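A quick way to confirm those runtime flags actually reach the process is to print them at the top of the benchmark script, before `import torch` (OpenMP reads OMP_NUM_THREADS once at initialisation, so exporting it after the interpreter is already running has no effect). A minimal sketch, using the variable names from above:

```python
import os

# OpenMP reads OMP_NUM_THREADS once, when its runtime initialises,
# so these must already be exported when the script starts.
for var in ("NUM_CPUS", "OMP_NUM_THREADS", "MKL_NUM_THREADS"):
    print(var, "=", os.environ.get(var, "<unset>"))
```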

(Needless to say, the models used for testing TF and PyTorch were the same, with the same number of parameters and the same input. The PyTorch model was exported with JIT.)
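For reference, the timing comparison can be done with a small wall-clock harness along these lines (a sketch, not the actual test script; `predict_fn` stands in for either `model(x)` in PyTorch or `model.predict(x)` in Keras):

```python
import time

def time_predict(predict_fn, x, warmup=5, runs=50):
    """Average wall-clock seconds per call of predict_fn(x).

    Warm-up iterations are discarded so that one-off costs
    (lazy initialisation, JIT optimisation passes) don't skew the mean.
    """
    for _ in range(warmup):
        predict_fn(x)
    start = time.perf_counter()
    for _ in range(runs):
        predict_fn(x)
    return (time.perf_counter() - start) / runs
```

Discarding warm-up runs matters particularly for a JIT-exported PyTorch model, since the first few calls can include optimisation passes.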

One thing to have on ARM is NNPACK. It made a huge difference for me on Android ARM.

Best regards

Thomas

But isn’t NNPACK built along with PyTorch when I don’t set NO_NNPACK (with the infamous “Brace yourself, we are building NNPACK”)?

Here is my cmake command:

cmake /home/pi/pytorch \
  -DPYTHON_EXECUTABLE=/usr/bin/python3 -DPYTHON_LIBRARY=/usr/lib/libpython3.5m.so.1.0 \
  -DPYTHON_INCLUDE_DIR=/usr/include/python3.5m \
  -DBUILDING_WITH_TORCH_LIBS=ON -DTORCH_BUILD_VERSION=1.0.0a0+8322165 \
  -DCMAKE_BUILD_TYPE=Release -DBUILD_TORCH=ON -DBUILD_PYTHON=ON -DBUILD_SHARED_LIBS=ON \
  -DBUILD_BINARY=OFF -DBUILD_TEST=ON -DINSTALL_TEST=ON -DBUILD_CAFFE2_OPS=ON \
  -DONNX_NAMESPACE=onnx_torch -DUSE_CUDA=0 -DUSE_DISTRIBUTED=OFF -DUSE_FBGEMM=0 \
  -DUSE_NUMPY=ON -DNUMPY_INCLUDE_DIR=/usr/local/lib/python3.5/dist-packages/numpy/core/include \
  -DUSE_SYSTEM_NCCL=OFF -DNCCL_INCLUDE_DIR= -DNCCL_ROOT_DIR= -DNCCL_SYSTEM_LIB= \
  -DCAFFE2_STATIC_LINK_CUDA=0 -DUSE_ROCM=0 -DUSE_NNPACK=1 \
  -DUSE_LEVELDB=OFF -DUSE_LMDB=OFF -DUSE_OPENCV=OFF -DUSE_QNNPACK=1 \
  -DUSE_FFMPEG=OFF -DUSE_GLOG=OFF -DUSE_GFLAGS=OFF -DUSE_SYSTEM_EIGEN_INSTALL=OFF \
  -DCUDNN_INCLUDE_DIR= -DCUDNN_LIB_DIR= -DCUDNN_LIBRARY= -DUSE_MKLDNN=0 \
  -DNCCL_EXTERNAL=0 -DCMAKE_INSTALL_PREFIX=/home/pi/pytorch/torch/lib/tmp_install \
  '-DCMAKE_C_FLAGS= -march=armv7-a -mtune=cortex-a8 -mfpu=neon -mfloat-abi=hard -O3' \
  '-DCMAKE_CXX_FLAGS= -march=armv7-a -mtune=cortex-a8 -mfpu=neon -mfloat-abi=hard -O3' \
  '-DCMAKE_EXE_LINKER_FLAGS= -Wl,-rpath,$ORIGIN -march=armv7-a -mtune=cortex-a8 -mfpu=neon -mfloat-abi=hard -O3' \
  '-DCMAKE_SHARED_LINKER_FLAGS= -Wl,-rpath,$ORIGIN -march=armv7-a -mtune=cortex-a8 -mfpu=neon -mfloat-abi=hard -O3' \
  -DTHD_SO_VERSION=1 -DCMAKE_PREFIX_PATH=/usr/lib/python3/dist-packages
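One way to double-check what that configuration actually produced is to read the flags back out of the generated CMakeCache.txt. A small sketch (the build-directory path in the comment is an assumption; adjust it to wherever cmake was run):

```python
def cmake_cache_flags(text, keys=("USE_NNPACK", "USE_QNNPACK", "USE_MKLDNN")):
    """Extract KEY:TYPE=VALUE entries of interest from CMakeCache.txt contents."""
    flags = {}
    for line in text.splitlines():
        line = line.strip()
        # Skip comments and lines that aren't KEY:TYPE=VALUE assignments.
        if line.startswith(("#", "//")) or "=" not in line:
            continue
        name_type, _, value = line.partition("=")
        name = name_type.split(":", 1)[0]
        if name in keys:
            flags[name] = value
    return flags

# e.g. against the build tree from the post (path is a guess):
# with open("/home/pi/pytorch/build/CMakeCache.txt") as f:
#     print(cmake_cache_flags(f.read()))
```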

Hello tom,
I think NNPACK is already built alongside PyTorch, as seen in my previous post.

I don’t have the hardware to measure this, so I cannot really comment. It would also depend on the architecture of your net.

Best regards

Thomas