Marat
(Закиров Марат)
February 16, 2021, 12:11pm
1
To the best of my knowledge, there are 3 ways to do this:

1. Run your PyTorch model on the Android GPU using libMACE
   (Run your PyTorch model on Android GPU using libMACE | by Vitaliy Hramchenko | Medium),
   which requires conversion to ONNX.
2. Via Android NNAPI
   (PyTorch Mobile Now Supports Android NNAPI | by PyTorch | PyTorch | Medium;
   (Prototype) Convert MobileNetV2 to NNAPI — PyTorch Tutorials 1.7.1 documentation),
   which requires a special PyTorch build?
3. Via the PyTorch Vulkan backend
   (tutorials/prototype_source at master · pytorch/tutorials · GitHub),
   which allows only float32, I think because Vulkan itself is a graphics API.

Could you please share your thoughts? I want to spend as little effort as I possibly can. I know all these features are prototypes, but that is OK for me.
If you use a float32 model, the most straightforward way is to use our Vulkan backend (tutorials/vulkan_workflow.rst at master · pytorch/tutorials · GitHub).
You might find that not all the operators for your model are supported yet.
Please report any missing ones here if you run into them.
You can use the Vulkan backend from Java by specifying Device.VULKAN during model loading:
Module module = Module.load("$PATH", Device.VULKAN)
Since version 1.8, the Vulkan backend is included in our main Gradle artifact (org.pytorch:pytorch_android), so you can skip the libtorch build process.
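For reference, pulling that prebuilt artifact into an app's build.gradle looks roughly like this (a sketch; the exact version strings are assumptions, check Maven Central for the latest release):

```groovy
dependencies {
    // Prebuilt PyTorch Android artifact; from 1.8 onward it includes the Vulkan backend.
    implementation 'org.pytorch:pytorch_android:1.8.0'
    // Optional helpers for bitmap/tensor conversion.
    implementation 'org.pytorch:pytorch_android_torchvision:1.8.0'
}
```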
For a quantized model, it is better to follow the Android NNAPI path.
You do not need a custom build of PyTorch and can use our default Maven artifact.
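The NNAPI conversion step from the linked tutorial can be sketched as follows. This is a prototype API, so details vary by version; `convert_model_to_nnapi` and the `nnapi_nhwc` flag are taken from the tutorial, while the toy Conv2d model is just an illustration:

```python
import torch
from torch.backends._nnapi.prepare import convert_model_to_nnapi

# Toy float32 model standing in for a real network (illustrative only).
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, kernel_size=3)).eval()

# NNAPI expects NHWC-contiguous inputs; the nnapi_nhwc attribute tells the
# converter that this example input is in NHWC layout.
example = torch.rand(1, 3, 32, 32).contiguous(memory_format=torch.channels_last)
example.nnapi_nhwc = True

with torch.no_grad():
    traced = torch.jit.trace(model, example)

# Produces a module that dispatches to NNAPI when run on a capable device.
nnapi_model = convert_model_to_nnapi(traced, example)
nnapi_model._save_for_lite_interpreter("model_nnapi.ptl")
```

The resulting `.ptl` file is then loaded on Android with the regular lite-interpreter loader; no custom PyTorch build is needed on the Python side either.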
3 Likes
Marat
(Закиров Марат)
April 15, 2022, 8:35am
4
torch version 1.11.0

```
RuntimeError: falseINTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1646755861072/work/torch/csrc/jit/passes/vulkan_rewrite.cpp":272, please report a bug to PyTorch. Mobile optimizaiton only available with Vulkan at the moment. Vulkan is not enabled. Please build with USE_VULKAN=1
```
I created a bug report (opened 15 Apr 22 UTC, oncall: mobile):
### 🐛 Describe the bug
```
from torch import nn
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

model = nn.Sequential(nn.Conv2d(3, 3, kernel_size=1))
model = model.cpu()
model.eval()
example0 = torch.rand(1, 3, 4, 4)
with torch.no_grad():
    traced = torch.jit.trace(model, example0)
print('torch version is', torch.__version__)
optimized_traced = optimize_for_mobile(traced, backend='vulkan')
optimized_traced._save_for_lite_interpreter("./traced_model_vulkan.ptl")
```
Error message:
```
torch version is 1.11.0
Traceback (most recent call last):
File "/home/marat/OCR/yolo3/exmobile.py", line 15, in <module>
optimized_traced = optimize_for_mobile(traced, backend='vulkan')
File "/home/marat/anaconda3/envs/cexp/lib/python3.7/site-packages/torch/utils/mobile_optimizer.py", line 67, in optimize_for_mobile
optimized_cpp_module = torch._C._jit_pass_vulkan_optimize_for_mobile(script_module._c, preserved_methods_str)
RuntimeError: falseINTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1646755861072/work/torch/csrc/jit/passes/vulkan_rewrite.cpp":272, please report a bug to PyTorch. Mobile optimizaiton only available with Vulkan at the moment. Vulkan is not enabled. Please build with USE_VULKAN=1
```
### Versions
torch 1.11.0 on Ubuntu 18.04, installed by:
conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
Marat
(Закиров Марат)
April 15, 2022, 3:58pm
5
I call

mModule = Module.load(MainActivity.assetFilePath(getApplicationContext(), "traced_model_vulkan.pt"), Device.VULKAN);

and get the following error from Android Studio:
```
/mnt/hugedisk/AndroidTorch/android-demo-app/ObjectDetection/app/src/main/java/org/pytorch/demo/objectdetection/MainActivity.java:190: error: no suitable method found for load(String,Device)
    mModule = Module.load(MainActivity.assetFilePath(getApplicationContext(), "traced_model_vulkan.pt"), Device.VULKAN);
method Module.load(String,Map<String,String>,Device) is not applicable (actual and formal argument lists differ in length)
method Module.load(String) is not applicable (actual and formal argument lists differ in length)
```
Linbin
(Linbin Yu)
April 26, 2022, 11:41pm
6
This API was changed (it needs one more parameter, for extra files):
/**
* Loads a serialized TorchScript module from the specified path on the disk to run on specified
* device.
*
* @param modelPath path to file that contains the serialized TorchScript module.
* @param extraFiles map with extra files names as keys, content of them will be loaded to values.
* @param device {@link org.pytorch.Device} to use for running specified module.
* @return new {@link org.pytorch.Module} object which owns torch::jit::Module.
*/
public static Module load(
final String modelPath, final Map<String, String> extraFiles, final Device device) {
if (!NativeLoader.isInitialized()) {
NativeLoader.init(new SystemDelegate());
}
return new Module(new NativePeer(modelPath, extraFiles, device));
}
/**
* Loads a serialized TorchScript module from the specified path on the disk to run on CPU.
*