Wiki.vec pre-trained word embeddings don't make any improvement

Hello all :slight_smile:

I'm trying to apply wiki.vec as a pre-trained word embedding with the lines of code below, but it doesn't make any improvement.

import torch
from torchtext.vocab import Vectors

MIN_COUNT = 2
url = 'https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.ar.vec'
TRG.build_vocab(train_data, vectors=Vectors('wiki.ar.vec', url=url), unk_init=torch.Tensor.normal_, min_freq=MIN_COUNT)

Is this enough, or did I miss something?

Kind regards,
Aiman Solyman

Yes, you might not see any improvement because build_vocab only loads the vectors into TRG.vocab.vectors; in your model you also need to copy them into the embedding layer, something like this.
I did the same thing in the project below.

# embedding layer; dimensions must match the pre-trained matrix
self.embedding = nn.Embedding(n_tokens, embedding_size)
# overwrite the random initialisation with the pre-trained vectors
self.embedding.weight.data.copy_(pretrained_vectors)
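
To connect this to the snippet in the question: after build_vocab, the aligned matrix lives in TRG.vocab.vectors, and copying it into the embedding layer wires the two together. A minimal sketch, assuming the legacy torchtext Field API from the question; the helper name load_pretrained_embeddings and the freeze flag are mine, for illustration only:

import torch
import torch.nn as nn

def load_pretrained_embeddings(field, freeze=False):
    # Assumes field.build_vocab(..., vectors=...) has already run, so
    # field.vocab.vectors holds the (vocab_size, emb_dim) matrix; words
    # missing from wiki.ar.vec were filled by unk_init (normal noise).
    vectors = field.vocab.vectors
    embedding = nn.Embedding(*vectors.shape)      # (n_tokens, embedding_size)
    embedding.weight.data.copy_(vectors)          # replace the random init
    if freeze:
        embedding.weight.requires_grad = False    # keep vectors fixed
    return embedding

# Usage inside the model, following the question's setup:
# self.embedding = load_pretrained_embeddings(TRG)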

Link to a project that contains the solution to your problem

Thank you Mr. @AbdulsalamBande :slight_smile: