JIT script issue with interpolate

My model is an object detection model that contains an interpolate layer. The model converts to a quantized model perfectly, but torch.jit.script fails on it.
I tried changing torch and torchvision to the currently available nightly versions, but the error persists.

Error with the stable versions of torch and torchvision:

Arguments for call are not valid.
The following variants are available:

  aten::__interpolate(Tensor input, int? size=None, float[]? scale_factor=None, str mode='nearest', bool? align_corners=None) -> (Tensor):
  Expected a value of type 'Optional[List[float]]' for argument 'scale_factor' but instead found type 'int'.

  aten::__interpolate(Tensor input, int[]? size=None, float[]? scale_factor=None, str mode='nearest', bool? align_corners=None) -> (Tensor):
  Expected a value of type 'Optional[List[float]]' for argument 'scale_factor' but instead found type 'int'.

  aten::__interpolate(Tensor input, int? size=None, float? scale_factor=None, str mode='nearest', bool? align_corners=None) -> (Tensor):
  Expected a value of type 'Optional[float]' for argument 'scale_factor' but instead found type 'int'.

  aten::__interpolate(Tensor input, int[]? size=None, float? scale_factor=None, str mode='nearest', bool? align_corners=None) -> (Tensor):
  Expected a value of type 'Optional[float]' for argument 'scale_factor' but instead found type 'int'.

Error after moving to the nightly versions of torch and torchvision:

Arguments for call are not valid.
The following variants are available:

  aten::__interpolate(Tensor input, int? size=None, float[]? scale_factor=None, str mode="nearest", bool? align_corners=None, bool? recompute_scale_factor=None) -> (Tensor):
  Expected a value of type 'Optional[List[float]]' for argument 'scale_factor' but instead found type 'int'.

  aten::__interpolate(Tensor input, int[]? size=None, float[]? scale_factor=None, str mode="nearest", bool? align_corners=None, bool? recompute_scale_factor=None) -> (Tensor):
  Expected a value of type 'Optional[List[float]]' for argument 'scale_factor' but instead found type 'int'.

  aten::__interpolate(Tensor input, int? size=None, float? scale_factor=None, str mode="nearest", bool? align_corners=None, bool? recompute_scale_factor=None) -> (Tensor):
  Expected a value of type 'Optional[float]' for argument 'scale_factor' but instead found type 'int'.

  aten::__interpolate(Tensor input, int[]? size=None, float? scale_factor=None, str mode="nearest", bool? align_corners=None, bool? recompute_scale_factor=None) -> (Tensor):
  Expected a value of type 'Optional[float]' for argument 'scale_factor' but instead found type 'int'.

It’s hard to tell without more context; could you share the code that causes the error so we can reproduce it on our end?

For some background, TorchScript does not coerce int to float the way Python does, so when you call interpolate you may need to write float(my_scale_factor) where my_scale_factor is an int.
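As a minimal sketch (the module and its names here are hypothetical, just to illustrate the cast), something like this should script cleanly:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Upsample(nn.Module):
    """Hypothetical module holding an int scale factor, like many detection heads do."""

    def __init__(self, scale_factor: int):
        super().__init__()
        self.scale_factor = scale_factor

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Cast the int to float so the call matches the
        # 'float? scale_factor' overload of aten::__interpolate.
        # Passing self.scale_factor directly (an int) triggers the
        # "Expected a value of type 'Optional[float]'" error above.
        return F.interpolate(x, scale_factor=float(self.scale_factor), mode="nearest")

scripted = torch.jit.script(Upsample(2))
out = scripted(torch.randn(1, 3, 8, 8))
print(out.shape)  # torch.Size([1, 3, 16, 16])
```

The only change from the failing version is wrapping the stored int in float(...) at the call site.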
