Controlling whether PyTorch uses NNPACK or THNN?

Let’s say you build PyTorch with NNPACK. Now, when doing inference on an ARM CPU, how do you tell PyTorch whether to use THNN or NNPACK for tensor computations?

And, to head off some likely questions:

  • Yes, I am using PyTorch on an ARM device; no, it’s not a smartphone.
  • Yes, I know I can also use Caffe2 on ARM devices; no, my model isn’t currently compatible with ONNX or Caffe2.
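
For reference, here’s a minimal sketch for checking whether a given build has NNPACK compiled in at all. It relies on torch._nnpack_available(), an internal underscore-prefixed helper rather than a stable public API, so treat its presence in your version as an assumption:

```python
import torch

# Internal helper (underscore-prefixed, not a stable public API).
# Returns True only if this build of PyTorch was compiled with NNPACK.
print("NNPACK available:", torch._nnpack_available())
```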

Figured it out here: Upgrade pytorch to use XNNPACK instead of NNPACK for android · Issue #30622 · pytorch/pytorch · GitHub

Assuming that you are indeed compiling PyTorch with NNPACK enabled … you are effectively at the mercy of the checks in aten/src/ATen/native/Convolution.cpp:_convolution() for convolutions (and equivalent checks for other operators), which do their best to pick the most efficient code path based on a multitude of conditions. The core of these usually boils down to testing whether an efficient implementation exists for that particular configuration. Your best bet, then, would be to either put a breakpoint in that function or printf-debug it to see why a more efficient code path is not taken, and then modify your network to meet those requirements. It might also be the case that you are not compiling with NNPACK enabled.
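
Building on that answer, one way to see which path is actually taken at runtime, without attaching a debugger, is to profile a forward pass: if the NNPACK branch in _convolution() fires, an NNPACK-specific convolution op should appear in the profile instead of the generic THNN one. A sketch (the exact op name, shown as _nnpack_spatial_convolution below, may vary between versions):

```python
import torch
import torch.nn as nn

# A tiny conv layer run in inference mode; whether NNPACK is picked
# still depends on the checks in Convolution.cpp:_convolution().
model = nn.Conv2d(3, 16, kernel_size=3, padding=1).eval()
x = torch.randn(1, 3, 224, 224)

with torch.no_grad(), torch.autograd.profiler.profile() as prof:
    model(x)

# Look at the operator names in the output: an entry such as
# _nnpack_spatial_convolution (name is version-dependent) indicates the
# NNPACK path; thnn_conv2d / plain convolution entries indicate THNN.
print(prof.key_averages().table(sort_by="cpu_time_total"))
```

And if you want to force the THNN path outright, the blunt option is at build time: setting USE_NNPACK=0 when compiling from source excludes NNPACK entirely, so the dispatch checks above can never select it.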
