[JIT] Scripted model aten::to failed on Mobile

Hi everyone,
Recently I have been working on transferring a complex model written in Python to mobile via TorchScript.
Scripting succeeds, but I am unable to load the model on an Android device, and here is why: torch.jit.script produces an inconsistent graph. Here is the error (from Android Studio):

2019-11-12 17:39:52.357 13915-13923/? I/orch.helloworl: jit_compiled:[OK] java.lang.AbstractStringBuilder java.lang.AbstractStringBuilder.append(java.lang.String) @ /apex/com.android.runtime/javalib/core-oj.jar
2019-11-12 17:39:52.357 13915-13915/? E/AndroidRuntime:     at android.app.servertransaction.LaunchActivityItem.execute(LaunchActivityItem.java:91)
        at android.app.servertransaction.TransactionExecutor.executeCallbacks(TransactionExecutor.java:149)
        at android.app.servertransaction.TransactionExecutor.execute(TransactionExecutor.java:103)
        at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2368)
        at android.os.Handler.dispatchMessage(Handler.java:107)
        at android.os.Looper.loop(Looper.java:213)
        at android.app.ActivityThread.main(ActivityThread.java:8106)
        at java.lang.reflect.Method.invoke(Native Method)
        at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:513)
        at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1100)
     Caused by: com.facebook.jni.CppException: 
    Arguments for call are not valid.
    The following operator variants are available:
      
      aten::to.other(Tensor self, Tensor other, bool non_blocking=False, bool copy=False) -> (Tensor):
      Expected a value of type 'Tensor' for argument 'other' but instead found type 'int'.
      
      aten::to.dtype(Tensor self, int dtype, bool non_blocking=False, bool copy=False) -> (Tensor):
      Expected at most 4 arguments but found 5 positional arguments.
      
      aten::to.device(Tensor self, Device device, int dtype, bool non_blocking=False, bool copy=False) -> (Tensor):
      Expected a value of type 'Device' for argument 'device' but instead found type 'int'.
      
      aten::to.dtype_layout(Tensor self, *, int dtype, int layout, Device device, bool pin_memory=False, bool non_blocking=False, bool copy=False) -> (Tensor):
      Argument dtype not provided.
      
      aten::to(Tensor(a) self, Device? device, int? dtype=None, bool non_blocking=False, bool copy=False) -> (Tensor(b|a)):
      Expected a value of type 'Optional[Device]' for argument 'device' but instead found type 'int'.
      
      aten::to(Tensor(a) self, int? dtype=None, bool non_blocking=False, bool copy=False) -> (Tensor(b|a)):
      Expected at most 4 arguments but found 5 positional arguments.
      
      aten::to(Tensor(a) self, bool non_blocking=False, bool copy=False) -> (Tensor(b|a)):
      Expected a value of type 'bool' for argument 'non_blocking' but instead found type 'int'.
    
    The original call is:
    at code/__torch__/refactored/box_regression.py:20:13
        boxes: Tensor) -> Tensor:
        _0 = torch.eq(deltas, deltas)
        _1 = torch.bitwise_not(torch.eq(torch.abs(deltas), inf))
        bool_tensor = torch.__and__(_0, _1)
        _2 = int(torch.item(torch.all(bool_tensor)))
        if bool(_2):
          pass
        else:
          ops.prim.RaiseException("Exception")
        boxes0 = torch.to(boxes, ops.prim.dtype(deltas), False, False, None)
                 ~~~~~~~~ <--- HERE
        _3 = torch.slice(boxes0, 0, 0, 9223372036854775807, 1)
        _4 = torch.slice(boxes0, 0, 0, 9223372036854775807, 1)
        widths = torch.sub(torch.select(_3, 1, 2), torch.select(_4, 1, 0), alpha=1)
        _5 = torch.slice(boxes0, 0, 0, 9223372036854775807, 1)
        _6 = torch.slice(boxes0, 0, 0, 9223372036854775807, 1)
        heights = torch.sub(torch.select(_5, 1, 3), torch.select(_6, 1, 1), alpha=1)
        _7 = torch.slice(boxes0, 0, 0, 9223372036854775807, 1)
        ctr_x = torch.add(torch.select(_7, 1, 0), torch.mul(widths, 0.5), alpha=1)
        _8 = torch.slice(boxes0, 0, 0, 9223372036854775807, 1)
    Compiled from code at /root/refactored/box_regression.py:86:16
            bool_tensor : torch.Tensor = (deltas == deltas) & ~(torch.eq(deltas.abs(), torch._six.inf))
            # bool_tensor : torch.Tensor = torch.functional.isfinite(deltas)
            assert bool_tensor.all().item(), "Error!"
            boxes = boxes.to(dtype=deltas.dtype)
                    ~~~~~~~~ <--- HERE
            ...

For some reason, scripting added None as a fifth positional argument to the aten::to call, which none of the available overloads accept.
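A minimal sketch for reproducing the pattern on a desktop build of PyTorch (the function names here are hypothetical, not from my model): script a small function that mirrors the failing line `boxes = boxes.to(dtype=deltas.dtype)` and inspect the code the scripter emits. The second function shows a possible workaround I am considering, `Tensor.type_as`, which may avoid the overload that the mobile runtime rejects; whether it actually dodges the bug on device is an assumption.

```python
import torch


@torch.jit.script
def cast_like(boxes: torch.Tensor, deltas: torch.Tensor) -> torch.Tensor:
    # Mirrors the failing line from box_regression.py:
    # the scripter lowers this keyword call into an aten::to overload.
    return boxes.to(dtype=deltas.dtype)


@torch.jit.script
def cast_like_workaround(boxes: torch.Tensor, deltas: torch.Tensor) -> torch.Tensor:
    # Hypothetical workaround: type_as takes a Tensor argument, so it is
    # lowered differently from the dtype keyword form above.
    return boxes.type_as(deltas)


if __name__ == "__main__":
    b = torch.zeros(2, 4, dtype=torch.int64)
    d = torch.zeros(2, 4, dtype=torch.float32)
    # Print the generated TorchScript to see the exact aten::to call emitted.
    print(cast_like.code)
    print(cast_like(b, d).dtype)             # both should come back float32
    print(cast_like_workaround(b, d).dtype)
```

On desktop both functions run fine; the failure only shows up when the serialized module is loaded on Android.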

Sorry for the late reply. In the future, please post in the “mobile” category so the mobile developers will see it. Do you have a script or notebook we can use to reproduce this issue?