Does TorchScript / JIT support __iter__? Odd NotImplementedError

So, I ran into the following error while using a model that had been converted to JIT via torch.jit.script:

     15     def __init__(self, module_list: Iterable[nn.Module]) -> None:
---> 16         self.module_dict = dict.fromkeys(module_list, None)

/usr/local/lib/python3.7/dist-packages/torch/jit/ in __iter__(self)
    617         def __iter__(self):
--> 618             return self.forward_magic_method("__iter__")
    620         def __getitem__(self, idx):

/usr/local/lib/python3.7/dist-packages/torch/jit/ in forward_magic_method(self, method_name, *args, **kwargs)
    612                 RecursiveScriptModule, method_name
    613             ):
--> 614                 raise NotImplementedError()
    615             return self_method(*args, **kwargs)

I assume that the cause of this issue might be __iter__, but I’m not entirely sure.

The PyTorch GitHub repo says here that __iter__ is not supported yet: pytorch/jit_python_reference.rst at master · pytorch/pytorch · GitHub, but that file hasn’t been updated in over a year.

I figured out the issue! It seems that any module run through torch.jit.script gains the __iter__ attribute that one would expect on lists, tuples, tensors, etc., even when the underlying module doesn’t actually support iteration:

print(hasattr(model.layer, "__iter__")) # False

jit_model = torch.jit.script(model)

print(hasattr(jit_model.layer, "__iter__")) # True
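A minimal repro of the behavior above (the Model class and its single Linear layer here are made up for illustration; only the hasattr pattern comes from the post). Note that hasattr only checks that the attribute exists; actually calling iter() on a scripted submodule without a compiled __iter__ still hits the NotImplementedError from the traceback:

```python
import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.layer = nn.Linear(4, 4)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layer(x)

model = Model()
# A plain nn.Linear is not iterable:
print(hasattr(model.layer, "__iter__"))      # False

jit_model = torch.jit.script(model)
# After scripting, every submodule becomes a RecursiveScriptModule,
# which defines __iter__ as a class-level magic method:
print(hasattr(jit_model.layer, "__iter__"))  # True

# The attribute exists, but iterating only works if the scripted module
# compiled its own __iter__; otherwise forward_magic_method raises:
try:
    iter(jit_model.layer)
except NotImplementedError:
    print("no compiled __iter__")
```

So hasattr alone can’t distinguish “iterable after scripting” from “not iterable after scripting”; both answer True.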

Though, how do I check whether I have a list of JIT model layers or a single layer?
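One possible workaround, sketched under the assumption that RecursiveScriptModule’s original_name property (which reports the Python class the module was scripted from) is stable enough to rely on; the helper name is mine, not a PyTorch API:

```python
import torch
import torch.nn as nn

def is_scripted_container(mod) -> bool:
    # Hypothetical helper: treat a scripted submodule as a "list of layers"
    # if it was scripted from a container class such as nn.ModuleList or
    # nn.Sequential, as reported by RecursiveScriptModule.original_name.
    return getattr(mod, "original_name", "") in ("ModuleList", "Sequential")

class Model(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(4, 4), nn.ReLU()])
        self.single = nn.Linear(4, 4)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Iterating over an nn.ModuleList inside a scripted forward is supported.
        for layer in self.layers:
            x = layer(x)
        return self.single(x)

jit_model = torch.jit.script(Model())
print(is_scripted_container(jit_model.layers))  # True
print(is_scripted_container(jit_model.single))  # False
```

This sidesteps hasattr(..., "__iter__") entirely, since that check is always True on scripted submodules.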