When loading a model: Unexpected key(s) in state_dict: "module.features.conv1.0.weight"

Hi,
I saved a model on one computer and then loaded it on another, and got these errors:

Missing key(s) in state_dict: “features.conv1.0.weight”,

Unexpected key(s) in state_dict: “module.features.conv1.0.weight”,

Why is it using “module” as a prefix when loading the model? Does this have anything to do with namespaces?
It is the same CNN on both computers, using the same PyTorch version.

thanks,

Usually the “module.” prefix is added when you’ve wrapped your model in DataParallel.
You could temporarily wrap your model in DataParallel to load the weights, or create a new state_dict by filtering out the “module.” prefix.
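A minimal sketch of the second option — filtering the prefix out of the checkpoint keys. The tiny Sequential model here is a hypothetical stand-in for your actual CNN; only the key-renaming pattern matters:

```python
import torch
import torch.nn as nn

# Hypothetical small model standing in for the real CNN.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())

# Simulate a checkpoint saved from a DataParallel-wrapped model:
# its keys carry the "module." prefix (e.g. "module.0.weight").
state_dict = nn.DataParallel(model).state_dict()

# Build a new state_dict with the first "module." prefix stripped,
# so it matches the parameter names of the bare (unwrapped) model.
stripped = {k.replace("module.", "", 1): v for k, v in state_dict.items()}
model.load_state_dict(stripped)
```

In practice you would get `state_dict` from `torch.load(checkpoint_path)` instead of building it in memory.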


Yes, that is true, the model originated from training using:

model = torch.nn.DataParallel(model).cuda()

However, the line that throws the exception is the line that loads the model, like so:

        if os.path.isfile(mdl):
            print("=> loading checkpoint '{}'".format(mdl))
            model.load_state_dict(torch.load(mdl))
            model = torch.nn.DataParallel(model).cuda()                
        else:
            print("=> no checkpoint found at '{}'".format(mdl))

Is that what you meant by:
add your model temporarily into DataParallel
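The key point is the order of the two lines: wrapping in DataParallel must happen *before* `load_state_dict`, so the model’s parameter names also carry the “module.” prefix and match the checkpoint keys. A self-contained sketch (the tiny Sequential is a hypothetical stand-in for the real CNN, and the checkpoint is simulated in memory):

```python
import torch
import torch.nn as nn

# Hypothetical CNN standing in for the real model.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())

# Simulate a checkpoint written by a DataParallel-trained run;
# its keys look like "module.0.weight".
checkpoint = nn.DataParallel(model).state_dict()

# Wrap *first*, then load: now the keys on both sides match.
model = nn.DataParallel(model)
model.load_state_dict(checkpoint)

# model.module gives back the underlying, unwrapped network.
```

In the original snippet this means swapping the `load_state_dict` and `DataParallel` lines (and loading the checkpoint with `torch.load` as before).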

Thanks,

Never mind, resolved, thanks.

In case someone else runs into this problem: I wrote a function to load pretrained weights that can ignore the ‘module.’ prefix and mismatched layer names: https://github.com/KaiyangZhou/deep-person-reid/blob/master/torchreid/utils/torchtools.py#L104. It can be used directly via copy and paste.