Out-of-vocabulary words with an embedding layer and a pre-trained model

I’m using a pre-trained fastText model to initialize the weights of my embedding layer for text classification. Once the model is trained, the embedding layer only has weights for the vocabulary of the training data.

When predicting on the test set, any word not seen in training gets assigned an out-of-vocabulary index and of course cannot use weights from the pre-trained embeddings. One advantage of fastText is that it can produce a vector even for OOV words, but I’m not sure how this can be implemented for the embedding layer. Is there a clever way to do this? Thank you!
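For context, here is a minimal NumPy sketch of the idea behind one common approach: fastText builds a word vector by averaging the vectors of the word's hashed character n-grams, so it can synthesise a vector for any string; at test time you could query the pretrained model for each OOV word and append that vector as a new row to the embedding matrix. The n-gram table, bucket count, and helper names below are all made up for illustration (a real setup would read them from the pretrained fastText binary):

```python
import zlib
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    # fastText wraps each word in boundary markers before extracting n-grams
    w = f"<{word}>"
    return [w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

N_BUCKETS, DIM = 2000, 8
rng = np.random.default_rng(0)
# stand-in for the pretrained model's hashed n-gram (bucket) table
ngram_table = rng.normal(size=(N_BUCKETS, DIM))

def fasttext_style_vector(word):
    # a word vector is the average of its n-gram vectors, so any
    # string -- seen during training or not -- gets a vector
    idxs = [zlib.crc32(g.encode()) % N_BUCKETS for g in char_ngrams(word)]
    return ngram_table[idxs].mean(axis=0)

# embedding matrix built from the training vocabulary only
vocab = {"the": 0, "cat": 1, "sat": 2}
emb = np.stack([fasttext_style_vector(w) for w in vocab])

def index_of(word):
    # on an OOV word, synthesise a vector and grow the embedding matrix
    global emb
    if word not in vocab:
        vocab[word] = len(vocab)
        emb = np.vstack([emb, fasttext_style_vector(word)])
    return vocab[word]

i = index_of("catlike")   # OOV at test time
print(emb.shape)          # → (4, 8): the matrix grew by one row
```

With a framework embedding layer the same trick amounts to enlarging the layer's weight matrix (or pre-building it over the union of train and test vocabularies) before running prediction.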