How can I enable GPU support for PyTorch Mobile on Android?

Hi! Is there any way to use a GPU delegate (like options.gpuDelegate for TFLite models) with PyTorch Mobile on Android to speed up inference? I’ve only found CPU options, for both QNNPACK and FBGEMM models. Thank you!


Is there any update on this? We would really love to move forward with PyTorch Mobile, but this is a blocker.

I’m waiting for this feature. :grinning: :grinning: :grinning:

They announced GPU support at Developer Day, but there is no documentation. I’m curious whether it’s released yet.

Mobile GPU support is still a prototype feature, available only in the nightly builds.
Prototype features are not documented.

You can find an example here:
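For reference, a rough sketch of the prototype workflow, based on the nightly NNAPI tutorial. The converter lives under torch.backends._nnapi, which is a prototype API and subject to change; the output file name here is just a placeholder:

```python
import torch
import torch.backends._nnapi.prepare
import torchvision.models.quantization.mobilenet

# Load a float (non-quantized) MobileNetV2 and switch to eval mode.
model = torchvision.models.quantization.mobilenet.mobilenet_v2(
    pretrained=True, quantize=False)
model.eval()

# NNAPI expects NHWC (channels-last) input; flag the example input
# so the converter knows its layout.
input_tensor = torch.zeros(1, 3, 224, 224).contiguous(
    memory_format=torch.channels_last)
input_tensor.nnapi_nhwc = True

# Trace the model, then convert the traced module to an NNAPI-backed one.
with torch.no_grad():
    traced = torch.jit.trace(model, input_tensor)
nnapi_model = torch.backends._nnapi.prepare.convert_model_to_nnapi(
    traced, input_tensor)

# Save for the lite interpreter so it can be loaded on Android.
nnapi_model._save_for_lite_interpreter("mobilenetv2-nnapi.ptl")
```

On the Android side you then load the saved .ptl with the lite interpreter as usual; NNAPI decides at runtime which accelerator executes the model.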

Isn’t this for NNAPI? I assume on a Qualcomm chip it will just use the DSP/NPU?

NNAPI can use both GPUs and DSPs/NPUs.
For example, if you quantize your model to 8 bits, the DSP/NPU will be used; otherwise the GPU will be the main compute unit.
The quantization is optional in the example above.
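To illustrate the quantized path, here is the same sketch with an 8-bit model. The scale and zero-point below are the values the tutorial uses for MobileNetV2 and are model-specific assumptions, as is the output file name:

```python
import torch
import torch.backends._nnapi.prepare
import torchvision.models.quantization.mobilenet

# A pretrained, already-quantized (8-bit) MobileNetV2 from torchvision.
model = torchvision.models.quantization.mobilenet.mobilenet_v2(
    pretrained=True, quantize=True)
model.eval()

# The example input must be quantized too; scale=0.03 and zero_point=128
# match the tutorial's MobileNetV2 settings and will differ per model.
input_float = torch.zeros(1, 3, 224, 224)
input_tensor = torch.quantize_per_tensor(
    input_float, scale=0.03, zero_point=128, dtype=torch.quint8)
input_tensor = input_tensor.contiguous(memory_format=torch.channels_last)
input_tensor.nnapi_nhwc = True

# Trace and convert exactly as in the float case.
with torch.no_grad():
    traced = torch.jit.trace(model, input_tensor)
nnapi_model = torch.backends._nnapi.prepare.convert_model_to_nnapi(
    traced, input_tensor)
nnapi_model._save_for_lite_interpreter("mobilenetv2-nnapi-quant.ptl")
```

With an 8-bit model like this, NNAPI will typically dispatch to the DSP/NPU where one is available; the float version above will generally run on the GPU instead.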

More benchmarks and information can be found here.