Getting pretrained word embeddings

I am simply looking to get the word embeddings/model from some pretrained word-embedding system like GloVe or Word2Vec. I am not quite sure how to go about this (I'm a novice in NLP), but I am guessing the desired output is some lookup table where I feed in, say, "dog" and get back its pretrained word embedding. However, I am not certain how I would set up GloVe/Word2Vec to produce this.

Also, is there literature detailing how much the specificity of the text the embeddings are pretrained on affects the downstream task when the corpora used for the actual task are significantly different?

The snli example in the pytorch/examples repository uses GloVe (via torchtext).
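In case it helps, here is a rough sketch of what "using GloVe" typically boils down to inside a PyTorch model: copying the pretrained vectors into an `nn.Embedding` layer. The `vocab_size`, `dim`, and the random `pretrained` tensor below are just placeholders for whatever your vocabulary and downloaded vectors actually are:

```python
import torch
import torch.nn as nn

vocab_size, dim = 100, 50
pretrained = torch.randn(vocab_size, dim)  # stand-in for real GloVe vectors

emb = nn.Embedding(vocab_size, dim)
emb.weight.data.copy_(pretrained)   # initialize from the pretrained matrix
emb.weight.requires_grad = False    # optionally freeze the embeddings

ids = torch.tensor([3, 17, 42])     # token indices from your vocab
print(emb(ids).shape)               # torch.Size([3, 50])
```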

Note that torchtext is a separate package (it is torchtext on PyPI, but you are probably better off installing from https://github.com/pytorch/text).
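To answer the lookup-table part directly: torchtext ships wrappers around pretrained vectors. A minimal sketch, assuming a reasonably recent torchtext (the exact API has moved around between versions, and the first call downloads and caches the vector files):

```python
from torchtext.vocab import GloVe

# Load the 100-d GloVe vectors trained on the 6B-token corpus
# (downloaded and cached locally on the first run).
glove = GloVe(name="6B", dim=100)

vec = glove["dog"]   # FloatTensor of shape (100,)
print(vec.shape)     # torch.Size([100])

# Out-of-vocabulary tokens come back as zero vectors by default.
print(glove["notarealwordxyz"].abs().sum())  # tensor(0.)
```

From there you can use `glove.stoi` and `glove.vectors` to assemble an embedding matrix for your own vocabulary, which is essentially what the snli example does through torchtext's vocab machinery.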

Best regards

Thomas