Say I have something like:
```python
import torch
from torch import Tensor
from torch import nn
import torch.nn.functional as F

class MyModule(nn.Module):
    def __init__(self, scale_factor: float):
        super().__init__()
        self.scale_factor = scale_factor

    def forward(self, x: Tensor) -> Tensor:
        x = F.interpolate(x, scale_factor=self.scale_factor)
        return x

model = MyModule(5)
model(torch.zeros(16, 1, 20, 20))
torch.jit.script(model)
```
This raises an error:
```
Expected a value of type 'Optional[float]' for argument 'scale_factor' but instead found type 'int'.
```
Actually, for some reason this toy model doesn't reproduce the error I get on my real model, which is:

```
Module 'MyModule' has no attribute 'final_scale_factor' (This attribute exists on the Python module, but we failed to convert Python type: 'numpy.int64' to a TorchScript type. Only tensors and (possibly nested) tuples of tensors, lists, or dicts are supported as inputs or outputs of traced functions, but instead got value of type int64… Its type was inferred; try adding a type annotation for the attribute.)
```
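For what it's worth, here is a minimal sketch that does seem to reproduce that second error. Everything except the attribute name `final_scale_factor` (taken from the error message) is assumed; the point is just that TorchScript refuses to convert a `numpy.int64` attribute:

```python
import numpy as np
import torch
from torch import Tensor, nn
import torch.nn.functional as F

class NumpyAttrModule(nn.Module):
    def __init__(self):
        super().__init__()
        # a numpy scalar attribute (e.g. the result of some numpy
        # computation); TorchScript cannot convert numpy.int64
        self.final_scale_factor = np.int64(5)

    def forward(self, x: Tensor) -> Tensor:
        return F.interpolate(x, scale_factor=self.final_scale_factor)

err = None
try:
    torch.jit.script(NumpyAttrModule())
except Exception as e:
    # scripting fails because the attribute's inferred type is unsupported
    err = e
```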
In both cases it seems to be an annotation issue. How would I annotate an instance attribute? I tried annotating in the `__init__` method like

```python
self.scale_factor = torch.jit.annotate(float, scale_factor)
```

but that doesn't help either.
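A pattern that appears to sidestep the inference problem (a sketch, not necessarily the canonical answer) is a PEP 526-style class-level annotation, combined with an explicit cast so the stored value really is a built-in `float` rather than an `int` or a numpy scalar:

```python
import torch
from torch import Tensor, nn
import torch.nn.functional as F

class AnnotatedModule(nn.Module):
    # class-level annotation tells TorchScript the attribute's type
    scale_factor: float

    def __init__(self, scale_factor: float):
        super().__init__()
        # cast so the runtime value matches the declared type
        self.scale_factor = float(scale_factor)

    def forward(self, x: Tensor) -> Tensor:
        return F.interpolate(x, scale_factor=self.scale_factor)

scripted = torch.jit.script(AnnotatedModule(5))
out = scripted(torch.zeros(16, 1, 20, 20))  # nearest upsampling, 20x20 -> 100x100
```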
EDIT: I was able to sort my issue by storing the value as a tensor,

```python
self.scale_factor = torch.tensor(scale_factor)
```

then using `Tensor.item()` on the other end. But I'd still like to know the answer!
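For completeness, a minimal sketch of that tensor workaround (the `float()` around `.item()` is an addition here, to make the number-to-float conversion explicit for TorchScript's type checker):

```python
import torch
from torch import Tensor, nn
import torch.nn.functional as F

class TensorAttrModule(nn.Module):
    def __init__(self, scale_factor: float):
        super().__init__()
        # store the value as a tensor; tensor attributes convert cleanly
        self.scale_factor = torch.tensor(scale_factor)

    def forward(self, x: Tensor) -> Tensor:
        # .item() pulls the scalar back out on the other end
        return F.interpolate(x, scale_factor=float(self.scale_factor.item()))

scripted = torch.jit.script(TensorAttrModule(5))
out = scripted(torch.zeros(16, 1, 20, 20))
```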