Using pre-trained GloVe vectors with a torchtext vocab

Hey there, I’m attempting to use an LSTM model to classify text data and want to use pre-trained GloVe embeddings to do so. I’m aware that the vectors are no longer an attribute of the vocab, as they were in previous versions of torchtext.

I have downloaded the pre-trained embeddings as follows:

import torchtext

vec = torchtext.vocab.GloVe(name='6B', dim=200)
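
As far as I understand it, vec then acts as a plain lookup table (this is just my reading of the docs, so correct me if I’m wrong):

print(vec.vectors.shape)                              # roughly torch.Size([400000, 200]) for 6B/200d
print(vec['the'].shape)                               # torch.Size([200]), single-token lookup
print(vec.get_vecs_by_tokens(['the', 'cat']).shape)   # torch.Size([2, 200]), batched lookup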

and have created the vocab (from a custom dataset) and batched it as follows:

from torchtext.vocab import build_vocab_from_iterator

# tokenizer is defined earlier, e.g. get_tokenizer('basic_english')
def yield_tokens(data):
    for i in data:
        yield tokenizer(''.join(i["Text"]))

myVocab = build_vocab_from_iterator(yield_tokens(train_data), min_freq=5, specials=('<unk>', '<BOS>', '<EOS>', '<PAD>'))
myVocab.set_default_index(myVocab["<unk>"])
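
My best guess so far (unverified, so please point out if this is not the intended way) is to build a weight matrix that follows myVocab's index order and load it into an nn.Embedding:

# Assumption: one GloVe vector per vocab entry, in myVocab's index order;
# tokens GloVe doesn't know (including the specials) come back as zero vectors.
pretrained_weights = vec.get_vecs_by_tokens(myVocab.get_itos())   # [len(myVocab), 200]

embedding = torch.nn.Embedding.from_pretrained(
    pretrained_weights,
    freeze=False,                        # True would keep the GloVe weights fixed during training
    padding_idx=myVocab['<PAD>'],
)

The idea being that the LSTM would then sit on top of this embedding layer, but I don't know if that's how it's meant to be done with the new API.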

text_transform = lambda x: [myVocab['<BOS>']] + [myVocab[token] for token in tokenizer(x)] + [myVocab['<EOS>']]
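
For a quick sanity check I just print the transform on a made-up sentence:

print(text_transform("this movie was great"))   # e.g. [BOS id, token ids..., EOS id]; unseen tokens map to <unk>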

def collate_batch(batch):
    label_list, text_list, offsets = [], [], [0]
    for sample in batch:                     # renamed to avoid shadowing the built-in iter()
        label_list.append(sample["Label"])
        processed_text = torch.tensor(text_transform(sample["Text"]), dtype=torch.int64)
        text_list.append(processed_text)
        offsets.append(processed_text.size(0))
    label_list = torch.tensor(label_list, dtype=torch.int64)
    offsets = torch.tensor(offsets[:-1]).cumsum(dim=0)   # starting offset of each sequence
    text_list = torch.cat(text_list)                     # all sequences concatenated into one 1-D tensor
    return label_list.to(device), text_list.to(device), offsets.to(device)

dataloader = DataLoader(train_data, batch_size=8, shuffle=False, collate_fn=collate_batch)
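
And this is how I pull a batch out, just to show the shapes I end up with:

labels, texts, offsets = next(iter(dataloader))
print(labels.shape, texts.shape, offsets.shape)   # [8], [total tokens in the batch], [8]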

I am curious how to hook the pre-trained GloVe vectors into this setup, since it seemed standard in most tutorials before the updates, and the paper I am following explicitly calls for pre-trained embeddings. Any help is greatly appreciated.