Hi, thank you as always for your help.
When I load my trained model using load_state_dict, it raises the following error:
File "/root/.pyenv/versions/anaconda3-5.3.1/lib/python3.7/site-packages/torch/optim/optimizer.py", line 114, in load_state_dict
raise ValueError("loaded state dict contains a parameter group "
ValueError: loaded state dict contains a parameter group that doesn't match the size of optimizer's group
I could load trained models until yesterday, but I have suddenly become unable to load them.
I am looking forward to hearing any suggestions to correct this issue.
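For reference, this is my save/load pattern, reduced to a self-contained sketch (the model and the checkpoint layout are placeholders for my real setup):

```python
import torch
import torch.nn as nn

# Toy stand-in for my real model and checkpoint.
model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2))
optimizer = torch.optim.Adam(model.parameters())

torch.save({"model": model.state_dict(),
            "optimizer": optimizer.state_dict()}, "checkpoint.pt")

checkpoint = torch.load("checkpoint.pt")
model.load_state_dict(checkpoint["model"])
# With my real checkpoint, the next line raises the ValueError above.
optimizer.load_state_dict(checkpoint["optimizer"])
```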
Hi Tom,
Thank you for your kind reply.
As you suggested, I was using the wrong number of parameters: I tried to load parameters for a different layer depth by mistake.
It works now that the model size and the parameter size match.
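In case anyone else hits this, the mismatch is easy to spot by comparing the per-group parameter counts of the saved optimizer state against a freshly built optimizer (a sketch, with toy models standing in for the two different depths):

```python
import torch
import torch.nn as nn

# Toy models of different depths, standing in for the mismatched setups.
deep = nn.Sequential(nn.Linear(8, 8), nn.Linear(8, 8))
shallow = nn.Sequential(nn.Linear(8, 8))

saved = torch.optim.SGD(deep.parameters(), lr=0.1).state_dict()
current = torch.optim.SGD(shallow.parameters(), lr=0.1).state_dict()

# A difference in these counts is exactly what triggers the ValueError.
for i, (s, c) in enumerate(zip(saved["param_groups"], current["param_groups"])):
    print(f"group {i}: saved {len(s['params'])} params vs current {len(c['params'])}")
```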
Hi Tom,
I ran into an issue: I want to add an extra layer to the original model but still use a pretrained checkpoint. Is there a possible solution?
Yes, you can either modify the state dict or make load_state_dict less strict.
Personally, I tend to favor the former (a translation function for the keys and/or filling in the model.state_dict() values for things not in the saved state dict) because I'm less likely to forget something that way, but the latter would probably be faster.
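Roughly, the two options look like this (a toy sketch; your model, key names, and any key renaming will differ):

```python
import torch
import torch.nn as nn

# Hypothetical setup: the checkpoint was saved before the extra layer existed.
old_model = nn.Sequential(nn.Linear(8, 8), nn.ReLU())
torch.save(old_model.state_dict(), "pretrained.pt")

new_model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2))
checkpoint = torch.load("pretrained.pt")

# Option 1: adapt the state dict. Start from the new model's own values so
# the added layer keeps its initialization, then overwrite everything the
# checkpoint provides (rename keys here if the layer names changed).
merged = new_model.state_dict()
merged.update(checkpoint)
new_model.load_state_dict(merged)

# Option 2: load non-strictly. strict=False skips keys the checkpoint lacks,
# so the new layer keeps its initialization; check the returned lists to
# confirm only the new layer was skipped.
missing, unexpected = new_model.load_state_dict(checkpoint, strict=False)
print("missing:", missing, "unexpected:", unexpected)
```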