No optimization when using pretrained word embeddings

Hi,
I’m implementing a classifier in which I want to use pretrained word2vec embeddings that were trained with gensim. I use gensim to load the saved vectors, then loop over my classifier’s training vocabulary and build a Tensor of embeddings, indexed by the vocab’s word indices.
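
For reference, the loading code looks roughly like this (the path and the toy vocab are placeholders for my actual setup):

import torch
from gensim.models import KeyedVectors

# load the word2vec vectors that were saved with gensim
word_vectors = KeyedVectors.load("word2vec.kv")   # placeholder path
embedding_dim = word_vectors.vector_size

vocab = {"the": 0, "cat": 1, "sat": 2}            # stands in for my classifier's word -> index dict

# one row per vocab entry; words without a pretrained vector stay zero
embeddings = torch.zeros(len(vocab), embedding_dim)
for word, idx in vocab.items():
    if word in word_vectors:
        embeddings[idx] = torch.from_numpy(word_vectors[word].copy())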

I then use those pretrained embeddings as weights for nn.Embedding in this way:

self.word_embeddings = nn.Embedding(vocab_size, embedding_dim)
# replace the randomly initialised weights with the pretrained vectors
self.word_embeddings.weight = nn.Parameter(embeddings)

If I don’t set the weights, the loss decreases just fine over many epochs. However, when using the pretrained embeddings as weights, the loss decreases slightly between the first and second epoch and then stays flat. Has anyone had a similar problem before?

Could you try this and see what happens?
self.word_embeddings.weight.data.copy_(embeddings)

Thanks Simon, that seems to have helped! Is there a reason why setting the weights this way makes a difference?

Did you construct the optimizer before that line? That would explain it: the optimizer is handed the original Parameter objects, so if you later replace self.word_embeddings.weight with a new nn.Parameter, the optimizer keeps updating the old Parameter while the forward pass uses the new one, and the embedding weights never get trained. Copying into the existing weight with .data.copy_() leaves the Parameter object itself in place, so the optimizer still updates it.
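
A minimal sketch of the difference (the sizes, data, and optimizer are just for illustration):

import torch
import torch.nn as nn

vocab_size, embedding_dim = 1000, 50                 # example sizes
embeddings = torch.randn(vocab_size, embedding_dim)  # stands in for the pretrained vectors

model = nn.Embedding(vocab_size, embedding_dim)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Problematic ordering: the optimizer already holds the old Parameter,
# so optimizer.step() never touches this new one and the embeddings
# effectively stay frozen.
model.weight = nn.Parameter(embeddings.clone())

# In-place copy: only the values change, the Parameter object the
# optimizer was given stays the same, so it keeps being updated.
model.weight.data.copy_(embeddings)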