I saved my model with torch.save(model.state_dict(), 'mytraining.pt'). When I try to load it, I get this error:
size mismatch for embeddings.weight: copying a param with shape torch.Size([7450, 300]) from checkpoint, the shape in current model is torch.Size([7469, 300]).
I found the cause: TEXT.build_vocab(train_data, vectors=Vectors(w2v_file)) produces a different vocabulary on each run, but I need that vocabulary to construct my model, whose constructor is def __init__(self, config, vocab_size, word_embeddings). How can I fix this?
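For reference, this is roughly my save-and-reload flow (MyModel here is a placeholder for my actual model class; the failure happens at load_state_dict):

```python
import torch
from torchtext.vocab import Vectors

# Training run: build the vocab, construct the model, train, save weights only
TEXT.build_vocab(train_data, vectors=Vectors(w2v_file))
model = MyModel(config, vocab_size=len(TEXT.vocab),
                word_embeddings=TEXT.vocab.vectors)
torch.save(model.state_dict(), 'mytraining.pt')

# Later run: build_vocab gives a different vocabulary in my setup
# (e.g. 7469 entries instead of 7450), so the fresh embedding layer
# no longer matches the checkpointed one
TEXT.build_vocab(train_data, vectors=Vectors(w2v_file))
model = MyModel(config, vocab_size=len(TEXT.vocab),
                word_embeddings=TEXT.vocab.vectors)
model.load_state_dict(torch.load('mytraining.pt'))  # size mismatch raised here
```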