ValueError: ModuleDict update sequence element #0 has length 6; 2 is required

Hello everyone, hope you are having a great day.
I’m trying to convert a MobileNetV1 into its quantized version. I’m using this repository, and I’m specifically having trouble with line 72:

self.body = _utils.IntermediateLayerGetter(backbone, cfg['return_layers'])

and this is where I get the error!
What is causing this error and how am I supposed to get rid of it?

Update:
I should add that I changed that class: instead of the normal OrderedDict, I used PyTorch’s ModuleDict, which is essentially an ordered dict of modules. This is what I mean:

from typing import Dict

import torch.nn as nn

class IntermediateLayerGetter(nn.ModuleDict):
    """
    Module wrapper that returns intermediate layers from a model
    It has a strong assumption that the modules have been registered
    into the model in the same order as they are used.
    This means that one should **not** reuse the same nn.Module
    twice in the forward if you want this to work.
    Additionally, it is only able to query submodules that are directly
    assigned to the model. So if `model` is passed, `model.feature1` can
    be returned, but not `model.feature1.layer2`.
    Arguments:
        model (nn.Module): model on which we will extract the features
        return_layers (Dict[name, new_name]): a dict containing the names
            of the modules for which the activations will be returned as
            the key of the dict, and the value of the dict is the name
            of the returned activation (which the user can specify).
    Examples::
        >>> m = torchvision.models.resnet18(pretrained=True)
        >>> # extract layer1 and layer3, giving as names `feat1` and `feat2`
        >>> new_m = torchvision.models._utils.IntermediateLayerGetter(m,
        >>>     {'layer1': 'feat1', 'layer3': 'feat2'})
        >>> out = new_m(torch.rand(1, 3, 224, 224))
        >>> print([(k, v.shape) for k, v in out.items()])
        >>>     [('feat1', torch.Size([1, 64, 56, 56])),
        >>>      ('feat2', torch.Size([1, 256, 14, 14]))]
    """
    _version = 2
    __annotations__ = {
        "return_layers": Dict[str, str],
    }

    def __init__(self, model, return_layers):
        if not set(return_layers).issubset([name for name, _ in model.named_children()]):
            raise ValueError("return_layers are not present in model")
        orig_return_layers = return_layers
        return_layers = {str(k): str(v) for k, v in return_layers.items()}
        # layers = OrderedDict()
        layers = nn.ModuleDict()
        for name, module in model.named_children():
            layers[name] = module
            if name in return_layers:
                del return_layers[name]
            if not return_layers:
                break

        super(IntermediateLayerGetter, self).__init__(layers)
        self.return_layers = orig_return_layers

    def forward(self, x):
        # out = OrderedDict()
        out = nn.ModuleDict()
        for name, module in self.items():
            x = module(x)
            if name in self.return_layers:
                out_name = self.return_layers[name]
                out[out_name] = x
        return out

At this point I know that if I revert this change, I no longer get any issue. However, why would this fail? Why can I not use ModuleDict instead of a normal OrderedDict?
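
For what it’s worth, here is a minimal standalone sketch of what I think is happening, with made-up layer names rather than anything from the repository. On my PyTorch version the last line reproduces the same ValueError as in the title (the reported “length” appears to just be the number of characters in the first key):

from collections import OrderedDict

import torch.nn as nn

# Hypothetical layer names, only to compare the two containers.
plain = OrderedDict()
plain['stage1'] = nn.Conv2d(3, 8, 3)
plain['stage2'] = nn.Conv2d(8, 16, 3)

# Works: ModuleDict.update() recognizes the OrderedDict as a mapping and
# iterates its .items() as (name, module) pairs.
from_ordered = nn.ModuleDict(plain)

# Fails for me: the ModuleDict argument is apparently treated as a plain
# iterable, so iterating it yields only the key strings, and a key string
# cannot be unpacked into a (name, module) pair.
from_moduledict = nn.ModuleDict(from_ordered)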

Any help is greatly appreciated
Thanks a lot in advance

Hi. I got the same error when I tried to concatenate two ModuleDict instances using the update() function; I posted a separate topic here. Apparently ModuleDict is not recognized as a Mapping. It looks like a bug to me, but I’m not sure.
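
Roughly, this is the kind of snippet that triggers it for me (hypothetical module names; on the PyTorch version I’m using, the update() call raises the same ValueError):

import torch.nn as nn

first = nn.ModuleDict({'encoder': nn.Linear(16, 8)})
second = nn.ModuleDict({'decoder': nn.Linear(8, 16)})

# On the affected version, update() does not treat the ModuleDict argument
# as a Mapping, so it falls back to iterating it as a sequence of key
# strings and fails to unpack them into (key, module) pairs.
first.update(second)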