Hello,
I am using Caffe2 for mobile and I am trying to test the XNNPACK engine for convolutions. Until now I have been using NNPACK, which gave a noticeable performance boost over the default engine. However, when I switch the engine to XNNPACK, it appears to fall back to the default engine, because performance gets worse. Is there a way to use XNNPACK in Caffe2 (built from PyTorch v1.5), and if so, how should it be done?
Code:
caffe2::NetDef modelDef;
CAFFE_ENFORCE(caffe2::ReadProtoFromFile("path to model def file", &modelDef));
for (int i = 0; i < modelDef.op_size(); ++i)
{
    caffe2::OperatorDef *opDef = modelDef.mutable_op(i);
    if (opDef->type() == "Conv")
    {
        opDef->set_engine("NNPACK");  // Works
        opDef->set_engine("XNNPACK"); // Probably not working
    }
}
NOTE: The code was built with XNNPACK support.
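For what it's worth, here is a sketch of how one might check whether an engine-specific Conv operator is actually present in the CPU operator registry. It assumes the "Conv_ENGINE_&lt;name&gt;" key format produced by the REGISTER_CPU_OPERATOR_WITH_ENGINE macro in caffe2/core/operator.h; I am not certain this is the intended way to verify it:

```cpp
#include <iostream>
#include <caffe2/core/operator.h>

int main() {
    // Assumption: engine-specific CPU operators are registered under keys
    // like "Conv_ENGINE_NNPACK", following the key format generated by
    // REGISTER_CPU_OPERATOR_WITH_ENGINE.
    for (const char *key : {"Conv_ENGINE_NNPACK", "Conv_ENGINE_XNNPACK"}) {
        std::cout << key << ": "
                  << (caffe2::CPUOperatorRegistry()->Has(key)
                          ? "registered"
                          : "NOT registered")
                  << "\n";
    }
    return 0;
}
```

If "Conv_ENGINE_XNNPACK" is not registered, that would explain the silent fallback to the default engine.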