[TorchScript] fail to convert list to tuple

Hi, in my torch.jit.script function I would like to convert a list to a tuple, but there doesn't seem to be an easy way to do it.

The size of the list is fixed, although apparently not in a way the JIT can see, which seems to be the problem.
On Python 3.8 with PyTorch 1.9.1:

result = []
val = torch.full((1, 1), 1.)
result.append(val)  # len(result) == 1
return tuple(result)

…fails with RuntimeError: cannot statically infer the expected size of a list in this context.

The same happens with

val = torch.full((1, 1), 1.)
result: List[Tensor] = [val] * 1
return tuple(result)

…which also fails with RuntimeError: cannot statically infer the expected size of a list in this context.
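For completeness, here is a self-contained reproducer (the wrapper function is made up for illustration; the body matches the first snippet) showing that the error is raised at script time, not when the function is called:

```python
import torch
from typing import List


def list_to_tuple():
    result: List[torch.Tensor] = []
    val = torch.full((1, 1), 1.)
    result.append(val)
    # TorchScript cannot determine the tuple's arity from a
    # dynamically sized list, so scripting this function fails.
    return tuple(result)


try:
    torch.jit.script(list_to_tuple)
    scripted_ok = True
except RuntimeError:
    scripted_ok = False  # compilation fails with the error above
```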

Can someone please tell me how to do this in torch.jit.script (not trace!)?
Thx

As the error suggests, tuple sizes are static (hardcoded) when "compiled", so you have to do something like
return (result[0], result[1])
and in some contexts you may also need to annotate the return type as Tuple[Tensor, Tensor].
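Put together, that suggestion looks roughly like this (function name and inputs are invented for illustration):

```python
import torch
from torch import Tensor
from typing import List, Tuple


@torch.jit.script
def as_pair(result: List[Tensor]) -> Tuple[Tensor, Tensor]:
    # The tuple's arity is spelled out explicitly, so the JIT can
    # infer a static size; tuple(result) would not compile here.
    return (result[0], result[1])


val = torch.full((1, 1), 1.)
first, second = as_pair([val, val * 2.])
```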

Ok, so that's what is meant by "statically inferred". Coming from the C++ side, I would have expected some counterpart of the C++

Tensor result[4] = {val};
retval = tuple(result);

like
result: List[Tensor, 4] = [val] * 4

Thank you for your help

If you're comparing to C++, List is like a std::vector<Tensor> of dynamic size, hence you cannot initialize specialized N-tuples from it (not without runtime checks and metaprogramming, anyway).
Though I agree that TorchScript is lacking in some aspects, in this case the workaround is to return a list.
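A sketch of that workaround, assuming the caller can accept a List[Tensor] return type instead of a tuple (the function and its contents are illustrative):

```python
import torch
from torch import Tensor
from typing import List


@torch.jit.script
def build_values(n: int) -> List[Tensor]:
    # Unlike a tuple, a List[Tensor] return type needs no
    # statically known size, so any n works.
    result: List[Tensor] = []
    for i in range(n):
        result.append(torch.full((1, 1), float(i)))
    return result


vals = build_values(3)
```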