nn.TransformerEncoderLayer getting error after translated to pytorch_android_lite

Hello,

I am getting an error when running my Transformer model with pytorch_android_lite.

Here’s minimal code to get the error:

import torch
import pytorch_lightning as pl
from torch.nn import TransformerEncoderLayer
from torch.jit import script
from torch.utils.mobile_optimizer import optimize_for_mobile

class TestModel(pl.LightningModule):
    def __init__(self):
        super(TestModel, self).__init__()
        self.model = TransformerEncoderLayer(128, 2, 512)

    def forward(self, src) -> torch.Tensor:
        # src is ignored here; a fixed random input is enough to reproduce the error
        src_real = torch.randn(1, 80, 128)
        rtn = self.model(src_real)
        return rtn

model = TestModel()
s = script(model)
m = optimize_for_mobile(s)
m._save_for_lite_interpreter("model.ptl")

After the TorchScript model is generated, I replace the model in the HelloWorldExample. Running the generated APK, I get the following error from adb:

--------- beginning of crash

2021-10-21 01:03:32.470 12338-12338/org.pytorch.helloworld E/AndroidRuntime: FATAL EXCEPTION: main
Process: org.pytorch.helloworld, PID: 12338
java.lang.RuntimeException: Unable to start activity ComponentInfo{org.pytorch.helloworld/org.pytorch.helloworld.MainActivity}: com.facebook.jni.CppException: Following ops cannot be found. Check Internal Login for the fix.{aten::sqrt.int, } ()
Exception raised from print_unsupported_ops_and_throw at …/torch/csrc/jit/mobile/import.cpp:206 (most recent call first):
(no backtrace available)
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:3639)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:3796)

I am running 1.9.0+cu111 with org.pytorch:pytorch_android_lite:1.9.0

Thank you.

Are you using a selective build? It seems the aten::sqrt op is somehow not included in the mobile build, so the runtime is not able to find that operator.

Hi kimishpatel,
I suppose PyTorch 1.9.0+cu111 and org.pytorch:pytorch_android_lite:1.9.0 should match? I generated the TorchScript model with the code above using those versions, without any modification.
Should unsupported operations be avoided by torch.jit.script itself, or should users avoid using them?
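One way to diagnose this kind of failure up front is to list the operators a scripted model actually requires and compare them against what the mobile runtime ships. A minimal sketch using `torch.jit.export_opnames` (the `TinyEncoder` class name and input shapes here are just illustrative, not from the original post):

```python
import torch
from torch import nn

class TinyEncoder(nn.Module):
    """Illustrative module mirroring the TransformerEncoderLayer from the repro."""
    def __init__(self):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(d_model=128, nhead=2, dim_feedforward=512)

    def forward(self, src: torch.Tensor) -> torch.Tensor:
        return self.layer(src)

# Script the module, then dump the root operator names it depends on.
scripted = torch.jit.script(TinyEncoder())
ops = torch.jit.export_opnames(scripted)
print("\n".join(sorted(ops)))
```

If an op such as aten::sqrt.int shows up in this list but is absent from the lite runtime build (or from a selective build's operator list), loading the .ptl will fail with the same "Following ops cannot be found" error as above.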
Thank You,

I can confirm that pytorch_android_lite:1.9.0 does not have this op (without a selective build). But it works in the new 1.10 release. Just change the dependencies to

implementation 'org.pytorch:pytorch_android_lite:1.10.0'
implementation 'org.pytorch:pytorch_android_torchvision_lite:1.10.0'

I will be checking this in the following days, thanks for that.

This is fixed in pytorch_android_lite 1.10.0.
Thank you guys for helping me.
