Word2Vec/GloVe Pretrained embeddings of custom dimensionality

I want to get vector embeddings of a custom dimension from pretrained word embedding models such as word2vec or GloVe, in PyTorch.
For example:

word = "cat"
output (of my desired size, say a 1024-dimension vector) = some_embedding_function(word)

Is there any way to do this in PyTorch using some available pretrained models?
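For context, the usual pattern for using pretrained vectors in PyTorch is to load the weight matrix into `torch.nn.Embedding.from_pretrained`. The sketch below uses a small random matrix and a toy vocabulary as stand-ins for real GloVe/word2vec weights (which you could load, e.g., with gensim or torchtext); `vocab`, `weights`, and the 300-dim size are illustrative assumptions, not part of the question.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins: in practice, load a real pretrained matrix
# (e.g. GloVe 6B.300d) and its word-to-index vocabulary instead.
vocab = {"cat": 0, "dog": 1, "fish": 2}
weights = torch.randn(len(vocab), 300)  # pretrained dim is fixed (here: 300)

# Wrap the pretrained matrix in an Embedding layer; freeze=True keeps
# the pretrained vectors from being updated during training.
embedding = nn.Embedding.from_pretrained(weights, freeze=True)

idx = torch.tensor([vocab["cat"]])
vec = embedding(idx)  # shape: (1, 300)
print(vec.shape)
```

Note that the output dimension here is whatever the pretrained matrix was trained with; the layer itself cannot change it.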



If you use a pretrained embedding, the output dimension is fixed by that model. If you want to decrease the dimension, you could try PCA on the embedding matrix, but I'm not really sure it is a good idea.
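As a minimal sketch of the PCA idea (assuming you already have the pretrained embedding matrix in memory; a random matrix stands in for it here):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a pretrained embedding matrix: 1000 words x 300 dims.
E = rng.standard_normal((1000, 300))

k = 100  # target dimensionality (must be <= 300: PCA can only reduce)

# SVD-based PCA: center the data, then project onto the
# top-k principal components (rows of Vt).
E_centered = E - E.mean(axis=0)
U, S, Vt = np.linalg.svd(E_centered, full_matrices=False)
E_reduced = E_centered @ Vt[:k].T  # shape: (1000, 100)
print(E_reduced.shape)
```

Note that PCA can only shrink the dimension; it cannot produce a 1024-dim vector from 300-dim pretrained embeddings.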

Hi @barthelemymp,
Thanks for your reply. Is there any PyTorch documentation you can point me to?
Thanks again