I’m not an expert on NLP, but I would assume pretrained embeddings might be treated similarly to pretrained kernels in a CNN.
If you are dealing with data from the same or a similar domain (e.g. English texts / “natural” images), the pretrained parameters might just work, probably even better than training from scratch, especially if you are working with a small dataset.
That being said, if your data comes from another domain (e.g. source code / medical images), the pretrained layers might not work very well and you would need to finetune them or train from scratch.
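A minimal PyTorch sketch of what finetuning a pretrained model might look like (the model and shapes here are made up for illustration, not a real pretrained checkpoint):

```python
import torch
import torch.nn as nn

# Hypothetical toy model: imagine the conv layer holds pretrained weights.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),  # "pretrained" feature extractor
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 10),                 # new classifier for the target task
)

# Freeze the "pretrained" layer; only the new classifier stays trainable.
for param in model[0].parameters():
    param.requires_grad = False

trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(trainable)  # only the Linear layer's weight and bias remain trainable
```

If the domains are close, freezing the early layers like this often works well; if the domain gap is large, you would unfreeze more layers (or everything) and finetune with a small learning rate.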
In the case of NLP and embeddings I would expect this domain shift to fail pretty badly, since an embedding is used as a lookup table indexed by word indices.
If the words you are dealing with are not in the pretrained dictionary, a lot of them might map to the “unknown” token, which might make the embedding more or less useless.
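A small sketch of this failure mode (the vocabulary and sentences are made up for illustration, not a real pretrained dictionary):

```python
# Toy "pretrained" vocabulary: index 0 is reserved for unknown words.
vocab = {"<unk>": 0, "the": 1, "cat": 2, "sat": 3, "on": 4, "mat": 5}

def tokens_to_indices(tokens, vocab):
    # Words missing from the dictionary all collapse to the <unk> index,
    # so their embedding vectors carry no word-specific information.
    return [vocab.get(tok, vocab["<unk>"]) for tok in tokens]

in_domain = "the cat sat on the mat".split()
out_of_domain = "def forward self x return".split()  # e.g. source code tokens

print(tokens_to_indices(in_domain, vocab))      # every word is found
print(tokens_to_indices(out_of_domain, vocab))  # everything maps to <unk>
```

Every out-of-domain token ends up with the same `<unk>` embedding vector, so the pretrained embedding effectively throws the input away.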