Use .state_dict() and .parameters() after wrapping modules in a dict

When defining my own model, I do something like this:

import torch.nn as nn

class MyNet(nn.Module):
    def __init__(self):
        super().__init__()
        # submodules stored in a plain Python dict
        self.fc = {'fc1': nn.Linear(10, 20)}

model = MyNet()
params = list(model.parameters())
print(len(params))
>> 0
state_dict = model.state_dict()
print(state_dict)
>> OrderedDict()

To be specific, I organize the internal submodules in a dict. I then find that several functionalities do not work properly with this design, including .state_dict() and .parameters(), as shown above. I guess that after wrapping modules in a dict, PyTorch can no longer locate them.

I know one way to solve this is to override those methods, unfolding the dict and collecting .parameters() and .state_dict() from the submodules, roughly as sketched below. I wonder if there is a neater way to resolve this issue.
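For reference, a minimal sketch of that manual workaround, assuming the dict values are all nn.Module instances (the class name MyNetManual is just for illustration). Note it only patches .parameters(); .state_dict(), .to(), .cuda(), etc. would each need similar treatment, which is why this gets unwieldy:

import torch.nn as nn

class MyNetManual(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = {'fc1': nn.Linear(10, 20)}

    def parameters(self, recurse=True):
        # yield any properly registered parameters first,
        # then walk the plain dict by hand
        yield from super().parameters(recurse)
        for module in self.fc.values():
            yield from module.parameters(recurse)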

This is expected, as PyTorch does not register modules stored in plain Python lists and dicts.
Use nn.ModuleDict and nn.ModuleList instead, and the parameters will be registered and show up in model.parameters() and the state_dict.
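For example, a minimal sketch of the fix, keeping the layer sizes from your snippet:

import torch.nn as nn

class MyNet(nn.Module):
    def __init__(self):
        super().__init__()
        # nn.ModuleDict registers each value as a submodule of MyNet
        self.fc = nn.ModuleDict({'fc1': nn.Linear(10, 20)})

model = MyNet()
print(len(list(model.parameters())))  # 2: fc1.weight and fc1.bias
print(model.state_dict().keys())      # odict_keys(['fc.fc1.weight', 'fc.fc1.bias'])

Since nn.ModuleDict supports the usual dict operations (indexing by key, .keys(), .values(), .items()), the rest of your code, e.g. self.fc['fc1'](x) in forward, should work unchanged.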