Size mismatch of embedding weights while loading

Hi, I have saved my vocab file and my LSTM model. When I try to load the LSTM model and the vocab dict, I get a size mismatch error similar to the one below.
Can anyone help?

RuntimeError                              Traceback (most recent call last)
<ipython-input-41-cc9010adf021> in <module>()
      4 #from flask_ngrok import run_with_ngrok
      5 import pickle
----> 6 from predict import *
      7 import threading
      8 

1 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict)
   1050         if len(error_msgs) > 0:
   1051             raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
-> 1052                                self.__class__.__name__, "\n\t".join(error_msgs)))
   1053         return _IncompatibleKeys(missing_keys, unexpected_keys)
   1054 

RuntimeError: Error(s) in loading state_dict for LSTM:
	size mismatch for embedding.weight: copying a param with shape torch.Size([10654, 100]) from checkpoint, the shape in current model is torch.Size([25002, 100]).

Did you manipulate this parameter after storing the state_dict?
If not, could you post a minimal code snippet to reproduce this issue, so that we could debug it?

Hi @ptrblck

I’m facing a similar problem. In my case, it happens after storing the state_dict.

Any solution?

Feel free to answer the same questions I’ve asked in my previous post.

Yes, the parameter is manipulated after storing the state_dict.

TXT = load_field('vocab.field')  # load the saved torchtext field (holds the vocab)
checkpoint = torch.load(PREDICT_DOMAIN_MODEL_PATH, map_location=lambda storage, loc: storage)  # load onto CPU
model = ClassifModel(TXT, checkpoint['params'])  # model shapes depend on TXT's vocab
model.load_state_dict(checkpoint['state_dict'])

I should note that the TXT vocab was saved after the model was trained, which may be the problem.
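As a quick check (a hedged sketch; it reuses the names from my snippet above, and the `embedding.weight` key and the `TXT.vocab` attribute are assumptions that may differ in the actual model/field):

```python
import torch

# Reuses load_field, 'vocab.field' and PREDICT_DOMAIN_MODEL_PATH from the snippet above.
TXT = load_field('vocab.field')
checkpoint = torch.load(PREDICT_DOMAIN_MODEL_PATH, map_location='cpu')

saved_rows = checkpoint['state_dict']['embedding.weight'].shape[0]  # rows stored in the checkpoint
current_rows = len(TXT.vocab)  # rows the rebuilt model would get from the saved field

# If these differ, load_state_dict will raise the size mismatch error.
print(saved_rows, current_rows)
```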

Yes, this could be an issue if TXT defines some parameter shapes (e.g. based on its vocab size), since it could later raise the shape mismatch (e.g. if the vocab is now smaller than it was during training).
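Here is a minimal sketch of how this happens (a standalone toy model, not the original one; the vocab sizes are taken from the error message above):

```python
import torch
import torch.nn as nn

class TinyModel(nn.Module):
    def __init__(self, vocab_size, emb_dim=100):
        super().__init__()
        # The embedding shape is derived from the vocab size passed in.
        self.embedding = nn.Embedding(vocab_size, emb_dim)

# Model trained with the original vocab (10654 tokens) and saved.
trained = TinyModel(vocab_size=10654)
state_dict = trained.state_dict()

# Model rebuilt later from a different (larger) vocab of 25002 tokens.
rebuilt = TinyModel(vocab_size=25002)

# Raises: size mismatch for embedding.weight:
# copying a param with shape [10654, 100], the shape in current model is [25002, 100]
rebuilt.load_state_dict(state_dict)
```

Saving the field/vocab together with (or before) the state_dict and rebuilding the model from that exact vocab keeps the shapes consistent.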