Check which engine is used for convolution operations

Hello,

I am trying to figure out which engine is being used for convolution operations when I run my model with PyTorch Mobile. I would like to switch it to XNNPACK if that isn't already the default. Any help on how to do either of these two things would be appreciated.

Hi @JustasZ. If you compile PyTorch from the current master branch, XNNPACK is already enabled on mobile. But if you want to use CocoaPods or the Android Maven packages, please wait for the 1.6.0 release, which will set XNNPACK as the default computation kernel.
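In the meantime, here is a minimal sketch of how you might check whether your scripted model is routed through XNNPACK and opt into it explicitly, using `torch.utils.mobile_optimizer.optimize_for_mobile` and `torch.jit.export_opnames`. The exact op names (e.g. `prepacked::conv2d_clamp_run`) are an assumption based on how XNNPACK-backed convolutions are typically rewritten; treat them as illustrative rather than guaranteed.

```python
import torch
import torchvision
from torch.utils.mobile_optimizer import optimize_for_mobile

# Script an example model (any nn.Module with convolutions will do).
model = torchvision.models.mobilenet_v2(pretrained=False).eval()
scripted = torch.jit.script(model)

# optimize_for_mobile rewrites eligible convolutions/linears into
# prepacked ops that run through XNNPACK on mobile builds.
optimized = optimize_for_mobile(scripted)

# Inspect the operators each module actually calls. On builds where
# XNNPACK is enabled, convolutions in the optimized module typically
# show up as "prepacked::..." ops (assumed op names) instead of
# plain "aten::conv2d".
print("Before optimization:")
print(sorted(op for op in torch.jit.export_opnames(scripted) if "conv" in op))

print("After optimization:")
print(sorted(op for op in torch.jit.export_opnames(optimized)
             if "conv" in op or op.startswith("prepacked::")))

# Save the optimized module for use with PyTorch Mobile.
optimized._save_for_lite_interpreter("model_mobile.ptl")
```

If the optimized model's op list contains the prepacked variants, the convolutions will be dispatched through XNNPACK at runtime on a mobile build that has it enabled; if you still see only `aten::conv2d`, the fallback THNN/NNPACK path is being used.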