[JIT] [Mobile] Wrong substitution of aten::to

Hello again,

As I introduced here ([JIT] Scripted model aten::to failed on Mobile), there might be a bug in TorchScript scripting or in the mobile library.
I’ve found that any aten::to scripted from Python code to JIT comes with an extra parameter in the function call. Let me show an example: here is the Python code and the place where the interpreter finds the error

Compiled from code at /root/anaconda2/envs/pytorch-nightly/lib/python3.7/site-packages/torchvision/ops/boxes.py:68:14
            in decreasing order of scores
        """
        if boxes.numel() == 0:
            return torch.empty((0,), dtype=torch.int64, device=boxes.device)
        # strategy: in order to perform NMS independently per class.
        # we add an offset to all the boxes. The offset is dependent
        # only on the class idx, and is large enough so that boxes
        # from different classes do not overlap
        max_coordinate = boxes.max()
        offsets = idxs.to(boxes) * (max_coordinate + 1)
                  ~~~~~~~ <--- HERE
        boxes_for_nms = boxes + offsets[:, None]
        keep = nms(boxes_for_nms, scores, iou_threshold)
        return keep

and here is what is wrong:

Arguments for call are not valid.
    The following operator variants are available:
      
      aten::to.other(Tensor self, Tensor other, bool non_blocking=False, bool copy=False) -> (Tensor):
      Expected at most 4 arguments but found 5 positional arguments.
      
      aten::to.dtype(Tensor self, int dtype, bool non_blocking=False, bool copy=False) -> (Tensor):
      Expected at most 4 arguments but found 5 positional arguments.
      
      aten::to.device(Tensor self, Device device, int dtype, bool non_blocking=False, bool copy=False) -> (Tensor):
      Expected a value of type 'Device' for argument 'device' but instead found type 'Tensor'.
      
      aten::to.dtype_layout(Tensor self, *, int dtype, int layout, Device device, bool pin_memory=False, bool non_blocking=False, bool copy=False) -> (Tensor):
      Argument dtype not provided.
      
      aten::to(Tensor(a) self, Device? device, int? dtype=None, bool non_blocking=False, bool copy=False) -> (Tensor(b|a)):
      Expected a value of type 'Optional[Device]' for argument 'device' but instead found type 'Tensor'.
      
      aten::to(Tensor(a) self, int? dtype=None, bool non_blocking=False, bool copy=False) -> (Tensor(b|a)):
      Expected a value of type 'Optional[int]' for argument 'dtype' but instead found type 'Tensor'.
      
      aten::to(Tensor(a) self, bool non_blocking=False, bool copy=False) -> (Tensor(b|a)):
      Expected a value of type 'bool' for argument 'non_blocking' but instead found type 'Tensor'.
    
    The original call is:
    at code/__torch__/torchvision/ops/boxes.py:16:9
      if torch.eq(torch.numel(boxes), 0):
        _3 = ops.prim.device(boxes)
        _1, _2 = True, torch.empty([0], dtype=4, layout=None, device=_3, pin_memory=None, memory_format=None)
      else:
        _1, _2 = False, _0
      if _1:
        _4 = _2
      else:
        max_coordinate = torch.max(boxes)
        _5 = torch.to(idxs, boxes, False, False, None)
             ~~~~~~~~ <--- HERE
        offsets = torch.mul(_5, torch.add(max_coordinate, 1, 1))
        _6 = torch.slice(offsets, 0, 0, 9223372036854775807, 1)
        boxes_for_nms = torch.add(boxes, torch.unsqueeze(_6, 1), alpha=1)
        _4 = __torch__.torchvision.ops.boxes.nms(boxes_for_nms, scores, iou_threshold, )
      return _4

For some reason the JIT emitted the call with an extraneous None parameter. It does this even if we pass all parameters by name to pin down which overload should be used.
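
A minimal sketch that reproduces the extra argument without torchvision (cast_like is just an illustrative name):

    import torch

    @torch.jit.script
    def cast_like(idxs: torch.Tensor, boxes: torch.Tensor) -> torch.Tensor:
        # Same pattern as torchvision's batched_nms: cast idxs to the
        # dtype/device of boxes via the Tensor overload of .to()
        return idxs.to(boxes)

    # On the nightly build used above, the graph shows aten::to called with
    # five positional arguments: (self, other, non_blocking, copy, None)
    print(cast_like.graph)

The trailing None is presumably the memory_format argument that newer schemas of aten::to accept but that none of the operator variants listed above declare.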

Please take a look at this. There is a high probability of a bug: TorchScript produces aten::to with an extra None parameter.

I am still running into this issue. TorchScript produces the redundant None parameter.

Sorry for the late reply. In the future, please post in the “mobile” category so the mobile developers will see it. Do you have a script or notebook that we can use to reproduce this issue?
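
For reference, a minimal repro script would presumably look like this (assuming the failing function is torchvision’s batched_nms, as the traceback suggests, and that the scripting side uses the same nightly build as above):

    import torch
    import torchvision  # importing torchvision registers its custom ops (e.g. nms)
    from torchvision.ops.boxes import batched_nms

    # Script and save the function that fails on mobile.
    scripted = torch.jit.script(batched_nms)
    scripted.save("batched_nms.pt")

    # Loading batched_nms.pt with the mobile libtorch build should then fail
    # with the "Arguments for call are not valid" error shown above.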