Okay. I’m first going to talk about getting your app to work, and then about trying it with mine. I’m using NDK r18 now, by the way.
I compile with:

```
git clone --recursive <your pytorch repo>
ANDROID_NDK=<path/to/Android/Sdk/ndk-bundle> scripts/build_android.sh -DCTOOLCHAIN=clang
```
Using pytorch from your GitHub link leads to the problem that it tries to fetch eigen from `https://github.com/RLovelett/eigen.git`, which no longer exists. I fix that by getting eigen from its current upstream repo. Then, during compilation, I hit `error: call to 'stod' is ambiguous`; of course there is a GitHub issue on this with no response. When the build finishes, not all of the requested libraries have been built, and I realize that for some reason `pytorch/c10` doesn’t exist.
`git submodule update` does nothing, so I give up on that checkout and just try the latest pytorch instead.
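For anyone else stuck at the missing `pytorch/c10` step: a more aggressive submodule refresh than plain `git submodule update` is worth trying before giving up. These are just standard git flags, nothing pytorch-specific, and I can’t promise they rescue an old checkout:

```shell
# Re-read submodule URLs from .gitmodules (picks up repos that have moved,
# like eigen), then check out every submodule recursively, discarding any
# stale local state. Run from the root of the pytorch checkout.
git submodule sync --recursive
git submodule update --init --recursive --force
```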
So I get that and compile it (and it works! amazing). Obviously I change the paths to the static libraries to where I have them. The first build attempt fails; I need to add libqnnpack. I am, by the way, using the header files you’ve included (so they are potentially different from mine). I get:

```
../../../../src/main/cpp/ATen/core/TensorImpl.h:755: error: undefined reference to 'caffe2::GetAllocator(at::DeviceType const&)'
../../../../src/main/cpp/ATen/core/TensorImpl.h:763: error: undefined reference to 'at::PlacementDeleteContext::makeDataPtr(at::DataPtr&&, void (*)(void*, unsigned int), unsigned int, at::Device)'
```
So I stop using your headers and instead include my own:

Note the `git/pytorch/build/aten/src/` entry: I had to point at the build of pytorch I keep for local (desktop) use, because running `scripts/build_android.sh` doesn’t actually generate all the required ATen header files.
And after that it worked!
Now about my app.
Because of a ‘multiple definitions’ error with another library (openfst), I compile normally and then a second time with `-DBUILD_SHARED_LIBS=YES` (after renaming the `build_android` dir, of course). The second build doesn’t complete, but it gets far enough to give me a `libc10.so`, which I use to avoid the error (I’m mentioning this just to give a fuller picture of what I’m doing).
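Spelled out, the two-pass build looks roughly like this. This is a sketch of what I ran, not a recommendation: the `build_android` output-directory name is what the script uses on my machine, and the rename is only there so the second pass doesn’t clobber the static output:

```shell
# Pass 1: the normal static Android build.
ANDROID_NDK=<path/to/Android/Sdk/ndk-bundle> scripts/build_android.sh

# Move the static output out of the way before rebuilding.
mv build_android build_android_static

# Pass 2: shared libraries. This build dies partway through, but
# libc10.so is produced early enough to be usable.
ANDROID_NDK=<path/to/Android/Sdk/ndk-bundle> scripts/build_android.sh -DBUILD_SHARED_LIBS=YES
```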
Then the app runs but can’t instantiate the Predictor: it fails on `new caffe2::Predictor(init_net, predict_net)`, and I don’t get any error message. I used `onnx_graph_to_caffe2_net` to export the model; `mobile_exporter` doesn’t seem to like Long inputs, and if I use a model that only has float inputs I get the same error in the end anyway.
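Since the constructor fails without any message in the app itself, the only other place I know to look is logcat. This is plain adb usage, nothing caffe2-specific, and the filter terms are just my guesses at what the native libraries might tag their output with:

```shell
# Clear the log buffer, reproduce the failing Predictor construction in the
# app, then grep for anything pytorch/caffe2-related that reached the log.
adb logcat -c
adb logcat | grep -iE "caffe2|c10|aten|fatal|abort"
```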
My model is not related to the camera example at all, but it’s quite small (6 MB) and simple to use. See this gist for how I run it (using pytorch built for my PC, just to check whether caffe2 works locally). The model is here.
EDIT: But I think the model is irrelevant. Even a model consisting of just a single fully connected layer results in the same problem.
I’d say the ‘outrage’ is just about the fact that a lot of people are having problems and no one seems to get any response. Even just a

> I didn’t have to change the caffe2 source, so that could probably be something where things went wrong for you.

is very helpful (so even if all you’re saying is “this seems like you did something wrong”). But most people aren’t getting anything at all.