Using pretrained word vectors

Hi, I'm implementing a word-level language model and need to use pretrained word vectors. I'm also using torchtext for all data processing. I was wondering whether the convention is to load the word vectors directly into the embedding matrix, or to convert sequences of word indices to word vectors in the `convert_token` function during torchtext's postprocessing step.
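For reference, the common convention is the first option: keep indices in the data pipeline and load the pretrained matrix into the model's `nn.Embedding` layer. Below is a minimal sketch of that approach. The `pretrained` tensor here is a random stand-in for the real vectors (in practice you would use the matrix torchtext builds, e.g. `vocab.vectors` after calling `build_vocab(..., vectors=...)`); the shapes and index values are illustrative only.

```python
import torch
import torch.nn as nn

# Stand-in for the pretrained matrix torchtext would give you
# (e.g. vocab.vectors); shape is (vocab_size, embed_dim).
vocab_size, embed_dim = 5, 4
pretrained = torch.randn(vocab_size, embed_dim)

# Copy the vectors directly into the embedding layer.
# freeze=False lets them be fine-tuned during training.
embedding = nn.Embedding.from_pretrained(pretrained, freeze=False)

# The iterator yields batches of word indices; the embedding
# layer maps them to vectors inside the model's forward pass.
indices = torch.tensor([[0, 2, 4]])
out = embedding(indices)
assert out.shape == (1, 3, embed_dim)
```

This keeps batching and padding cheap (integer tensors rather than dense float tensors), which is why the embedding-matrix route is preferred over converting tokens to vectors in postprocessing.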