com.facebook.jni.CppException: Method 'forward' is not defined.
Exception raised from get_method at ../../../../src/main/cpp/libtorch_include/arm64-v8a/torch/csrc/jit/api/object.h:103 (most recent call first):
(no backtrace available)
Again, the same loading code works with the model exported without optimize_for_mobile.
So instead I ended up refactoring my classes from forward/inference to forward_train/forward, and now it works.
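Roughly, the export path now looks like this (a minimal sketch with placeholder module/file names, not the actual classes):

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

class TinyModel(torch.nn.Module):  # placeholder, not the real model
    def __init__(self):
        super().__init__()
        self.rnn = torch.nn.GRU(80, 256, batch_first=True)

    @torch.jit.export
    def forward_train(self, x):
        # training path (this used to be `forward`)
        out, _ = self.rnn(x)
        return out

    def forward(self, x):
        # inference path (this used to be `inference`);
        # optimize_for_mobile looks for `forward` by default,
        # other methods would need preserved_methods=[...] to survive
        out, _ = self.rnn(x)
        return out

scripted = torch.jit.script(TinyModel())
mobile = optimize_for_mobile(scripted)  # default passes, no extra quantization/fuse calls
# .save(...) for the full JIT runtime on Android, or the lite-interpreter format:
mobile._save_for_lite_interpreter("model.ptl")
```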
The only thing is that (without any further quantization or fuse calls) the optimize_for_mobile version takes 5 seconds for a given task, while the unoptimized one takes 4 seconds.
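For what it's worth, a rough way to compare the two exported files outside the app (file names and the input shape are placeholders, and desktop timings won't map 1:1 to the phone, but it shows whether the gap reproduces at all):

```python
import time
import torch

def bench(path, example, n=20):
    m = torch.jit.load(path)
    m.eval()
    with torch.no_grad():
        for _ in range(3):           # warm-up
            m(example)
        t0 = time.perf_counter()
        for _ in range(n):
            m(example)
    return (time.perf_counter() - t0) / n

example = torch.randn(1, 200, 80)    # placeholder input shape
print("plain     :", bench("model_plain.pt", example))
print("optimized :", bench("model_mobile.pt", example))
```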
There are multiple calls, but they go to 3 different models invoked in sequence.
Models A and B are rather heavy on GRUs and LSTMs, together with a few (1D) convolutions.
B also uses a Gaussian upsampling mechanism with a bit of manual implementation, roughly like the sketch below.
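By Gaussian upsampling I mean spreading each input token over the output frames with Gaussian weights centred at its cumulative duration; a simplified sketch (names and shapes are illustrative, not the actual code):

```python
import torch

def gaussian_upsample(h, durations, sigma):
    """
    h:         (B, T_in, C)  encoder outputs
    durations: (B, T_in)     predicted durations per token, in frames
    sigma:     (B, T_in)     per-token Gaussian widths
    returns:   (B, T_out, C) upsampled features
    """
    # centre of each token = cumulative duration minus half its own duration
    ends = torch.cumsum(durations, dim=1)                   # (B, T_in)
    centres = ends - 0.5 * durations                        # (B, T_in)
    t_out = int(ends[:, -1].max().item())                   # total output frames
    frames = torch.arange(t_out, device=h.device).float()   # (T_out,)

    # Gaussian weight of every output frame w.r.t. every input token
    dist = frames.view(1, -1, 1) - centres.unsqueeze(1)     # (B, T_out, T_in)
    w = torch.exp(-0.5 * (dist / sigma.unsqueeze(1)) ** 2)
    w = w / (w.sum(dim=2, keepdim=True) + 1e-8)             # normalise over tokens

    return torch.bmm(w, h)                                  # (B, T_out, C)
```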
Both are significantly slower in the optimize_for_mobile version.
The biggest part of it is this layer.
The rest is also quite similar to the model in this repo.