Caffe2 backend for ONNX is slower?

Hi, I have tried the tutorial: Transfering a model from PyTorch to Caffe2 and Mobile using ONNX.

However, I found the inference speed of onnx-caffe2 is about 10x slower than the original PyTorch AlexNet.
Can anyone help? Thanks.
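In case the measurement method matters: a common pitfall when comparing backends is including the first (lazily initialized) run in the timing. Below is a minimal, hedged sketch of a warm-up-aware timing loop; the `benchmark` helper is purely illustrative and not from the tutorial.

```python
import time

def benchmark(fn, warmup=10, iters=100):
    """Return mean seconds per call of fn(), after warm-up runs.

    Warm-up runs absorb one-time costs (graph construction, cuDNN
    autotuning, memory allocation) so they don't skew the measurement.
    """
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters
```

With real models you would pass the forward pass as a closure, e.g. `benchmark(lambda: torch_model(x))` versus the Caffe2 net run. Note that for GPU timing the device must be synchronized before reading the clock (e.g. `torch.cuda.synchronize()` in PyTorch), otherwise only kernel-launch time is measured.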

Machine:
Ubuntu 14.04
CUDA 8.0
cudnn 7.0.3
Caffe2 latest.
PyTorch 0.3.0