Wiki.vec pre-trained word embeddings don't make any improvement

Hello all :slight_smile:

I’m trying to apply wiki.vec as pre-trained word embeddings with the lines of code below, but they don’t make any improvement.

url = ‘
TRG.build_vocab(train_data, vectors=Vectors(‘’, url=url), unk_init=torch.Tensor.normal_, min_freq=MIN_COUNT)

Is this enough, or did I miss something?

Kind regards,
Aiman Solyman

Yes, you might not see improvements because your model needs to do something like this.
I did the same thing in the project below.

self.embedding = nn.Embedding(n_tokens, embedding_size)
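To expand on that line: creating `nn.Embedding` on its own initializes the weights randomly, so the pre-trained vectors loaded by `build_vocab` never reach the model. A common torchtext pattern is to copy `TRG.vocab.vectors` into the embedding layer's weight matrix. Here is a minimal sketch; the sizes and the `pretrained` tensor are placeholders standing in for `TRG.vocab.vectors`:

```python
import torch
import torch.nn as nn

# Placeholder sizes; in practice use len(TRG.vocab) and the vector dimension.
vocab_size, embedding_size = 1000, 300

# Stands in for TRG.vocab.vectors (the matrix torchtext builds from wiki.vec).
pretrained = torch.randn(vocab_size, embedding_size)

# Randomly initialized embedding layer, as in the model above.
embedding = nn.Embedding(vocab_size, embedding_size)

# The key step: overwrite the random weights with the pre-trained vectors.
embedding.weight.data.copy_(pretrained)

# Optionally freeze the embeddings so training does not update them:
# embedding.weight.requires_grad = False
```

Without the `copy_` step (or the equivalent `nn.Embedding.from_pretrained(pretrained)`), the pre-trained vectors are built into the vocab but never used, which would explain seeing no improvement.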

Link to a project that contains a solution to your problem

Thank you Mr. @AbdulsalamBande :slight_smile: