How to use pretrained FastText embeddings for an LSTM tagger

I want to use German pretrained fastText embeddings for my LSTM tagger model.
There are a few options for obtaining the full fastText embedding collection. Which one would you recommend?
And how do I load the embeddings for the tokens in my training data so that the model's embedding layer is initialized with the fastText representations?
Could you maybe give me some example code or a tutorial that I can follow? I haven't found anything useful yet. My training data is in CoNLL-2003 format.
Thank you!

Here is something better: a minimal example of copying pretrained vectors into an nn.Embedding layer.

import torch
import torch.nn as nn

# map each token to a row index of the embedding matrix (0-based, so all
# indices stay within the n_tokens rows of the layer)
data_dictionary = {"a": 0, "apple": 1, "milk": 2}
n_tokens = 3
embedding_size = 8
embedding = nn.Embedding(n_tokens, embedding_size)
# stand-in for the real fastText vectors, one row per token in the dictionary
pretrained_fasttext_embeddings = torch.rand((n_tokens, embedding_size))
embedding.weight.data.copy_(pretrained_fasttext_embeddings)
print(embedding.weight)
tensor([[0.3748, 0.4387, 0.9356, 0.7790, 0.7401, 0.3412, 0.1741, 0.4702],
        [0.6634, 0.8787, 0.9448, 0.2775, 0.5960, 0.6934, 0.6094, 0.0103],
        [0.8099, 0.9782, 0.4780, 0.0253, 0.5966, 0.0216, 0.5862, 0.6692]],
       requires_grad=True)
[0.3748, 0.4387, 0.9356, 0.7790, 0.7401, 0.3412, 0.1741, 0.4702] is the embedding for "a"
[0.6634, 0.8787, 0.9448, 0.2775, 0.5960, 0.6934, 0.6094, 0.0103] is the embedding for "apple"
[0.8099, 0.9782, 0.4780, 0.0253, 0.5966, 0.0216, 0.5862, 0.6692] is the embedding for "milk"
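
In the snippet above, torch.rand is only a placeholder. To fill the matrix with real German fastText vectors, a rough sketch along these lines should work (assuming the official fasttext Python package and the cc.de.300.bin model downloaded from fasttext.cc; the file path and the toy vocabulary are illustrative, adjust them to your setup):

import fasttext
import torch
import torch.nn as nn

# load the German common-crawl model (download cc.de.300.bin from fasttext.cc first)
ft = fasttext.load_model("cc.de.300.bin")  # path is an assumption

# vocabulary built from your CoNLL-2003 training data (illustrative)
data_dictionary = {"der": 0, "Hund": 1, "läuft": 2}
embedding_size = ft.get_dimension()  # 300 for the official models

# one row per token, filled with the fastText vector for that token
weights = torch.zeros(len(data_dictionary), embedding_size)
for token, idx in data_dictionary.items():
    weights[idx] = torch.from_numpy(ft.get_word_vector(token))

# freeze=False keeps the vectors trainable; set freeze=True to leave them fixed
embedding = nn.Embedding.from_pretrained(weights, freeze=False)

A nice property of fastText here is that get_word_vector also returns a vector for out-of-vocabulary words via subword n-grams, so rare words in your CoNLL data still get a sensible embedding.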

Thank you! What kind of punctuation is missing between "pretrained", "fasttext" and "embeddings"?
And do I then pass the nn.Embedding() layer a list of strings, i.e. a list of the tokens?
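
For reference, a minimal sketch of how the lookup works (reusing the toy data_dictionary and embedding from the example above): nn.Embedding expects integer indices as a LongTensor, not strings, so each token is first mapped to its index through the dictionary:

import torch

# tokens from one training sentence, mapped to indices via the dictionary
sentence = ["a", "milk", "apple"]
indices = torch.tensor([data_dictionary[tok] for tok in sentence], dtype=torch.long)

# the embedding layer looks up one row per index -> shape (3, embedding_size)
vectors = embedding(indices)
print(vectors.shape)  # torch.Size([3, 8])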