Using an Embedding layer for fastText

Is there any advantage to loading my fastText embeddings into an nn.Embedding layer (which I keep frozen, by the way) instead of just using the fastText model directly to get the word vectors?

I mean, the big advantage of fastText is its ability to create an embedding for an OOV word based on its character n-grams. If I use an Embedding layer (and do not fine-tune it), I lose that ability.
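To make the OOV point concrete, here is a minimal sketch. The vocabulary and vectors are random stand-ins for real fastText data, and the `<unk>` fallback index is an assumption about how such a vocab is typically set up:

```python
import torch
import torch.nn as nn

# Toy vocabulary; index 0 is the <unk> fallback for OOV words.
# The vectors are random stand-ins for real fastText word vectors.
stoi = {"<unk>": 0, "the": 1, "cat": 2}
vectors = torch.randn(len(stoi), 300)

# freeze=True sets requires_grad=False, i.e. a non-fine-tuned layer.
embedding = nn.Embedding.from_pretrained(vectors, freeze=True)

def embed(word):
    return embedding(torch.tensor(stoi.get(word, 0)))

# Every OOV word collapses onto the same <unk> vector, whereas the
# fastText model's get_word_vector would build a distinct vector for
# each word from its character n-grams.
v1 = embed("catnip")    # OOV
v2 = embed("doghouse")  # OOV
print(torch.equal(v1, v2))  # True: both fall back to <unk>
```

So with a frozen Embedding layer, the distinction between different OOV words is indeed lost.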

If you are not fine-tuning the embeddings, then it is fine not to use an embedding layer.

The only implication, then, is that I should not numericalize my instances, right?

The pretrained fastText embeddings can be attached to the vocabulary that numericalizes tokens.

I’m referring to converting sentences into LongTensors. If I do not use an embedding layer, I should not need to create a vocabulary and so on.
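Right, in that case the pipeline can skip indices entirely and map tokens straight to float vectors. A sketch, where `get_word_vector` is a stand-in for the real fastText call (in actual code it would be `fasttext.load_model(...).get_word_vector(word)`):

```python
import torch

# Stand-in for a loaded fastText model: returns a vector for any
# word, OOV or not. A real model would compose it from n-grams.
def get_word_vector(word):
    torch.manual_seed(abs(hash(word)) % (2**31))
    return torch.randn(300)

def sentence_to_tensor(tokens):
    # No vocabulary and no LongTensor of indices: each token is
    # mapped straight to its float vector.
    return torch.stack([get_word_vector(t) for t in tokens])

x = sentence_to_tensor(["the", "catnip", "doghouse"])
print(x.shape)  # torch.Size([3, 300])
```

The model then consumes the float tensor directly instead of an index tensor followed by an Embedding lookup.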

In torchtext, you could load the fastText vectors into a vocab instance, which is used to numericalize tokens.
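What that buys you, effectively, is an embedding matrix aligned row-for-row with the vocabulary. A hand-rolled sketch of the same idea (toy stoi mapping and random stand-in vectors; torchtext's vocab-plus-vectors machinery wraps this pattern for you):

```python
import torch
import torch.nn as nn

# Toy stoi mapping and random stand-ins for fastText vectors,
# aligned row-for-row with the vocabulary.
stoi = {"<unk>": 0, "the": 1, "cat": 2, "sat": 3}
vectors = torch.randn(len(stoi), 300)
embedding = nn.Embedding.from_pretrained(vectors, freeze=True)

def numericalize(tokens):
    # The vocab turns tokens into a LongTensor of indices...
    return torch.tensor([stoi.get(t, 0) for t in tokens],
                        dtype=torch.long)

ids = numericalize(["the", "cat", "sat"])
# ...and the frozen embedding layer turns indices into vectors.
out = embedding(ids)
print(out.shape)                        # torch.Size([3, 300])
print(torch.equal(out[1], vectors[2]))  # True: plain row lookup
```

The trade-off discussed above still applies: this path is convenient for batching, but OOV words all hit the `<unk>` row instead of getting fastText's n-gram-composed vectors.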