Should I use an Embedding layer if I am vectorizing input text myself?

I want to run experiments by training the same model with different text representations: averaging word embeddings (e.g. Word2Vec, FastText), using sentence encoders (e.g. InferSent, USE), and lastly using my own representations obtained by training an autoencoder. I am planning to write a function that, given an input text, converts it into a vector using one of the aforementioned techniques. So, if I am vectorizing the input text myself, is there any reason for me to have an Embedding layer? Note that I am not planning to train my vectors further.
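To make the plan concrete, here is a minimal sketch of the kind of function I have in mind, using only the averaging branch with a toy word-vector table (the vector values, names, and dimensionality are purely illustrative, standing in for a real Word2Vec/FastText model):

```python
import numpy as np

# Toy word vectors standing in for a real Word2Vec/FastText model;
# the values and 3-dimensional size are just for illustration.
WORD_VECS = {
    "the": np.array([0.1, 0.2, 0.3]),
    "cat": np.array([0.4, 0.5, 0.6]),
    "sat": np.array([0.7, 0.8, 0.9]),
}
DIM = 3

def vectorize(text: str, method: str = "average") -> np.ndarray:
    """Convert a text into a fixed-size vector outside the model."""
    if method == "average":
        vecs = [WORD_VECS[w] for w in text.lower().split() if w in WORD_VECS]
        if not vecs:
            # Fall back to a zero vector for fully out-of-vocabulary input.
            return np.zeros(DIM)
        return np.mean(vecs, axis=0)
    # Branches for a sentence encoder or autoencoder would go here.
    raise ValueError(f"unknown method: {method}")

print(vectorize("the cat sat"))  # averaged word vectors, shape (3,)
```

The model itself would then only ever see these fixed-size vectors, never raw tokens.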

Is the Embedding layer just a convenient way to vectorize text, with the option of fine-tuning the word vectors on downstream tasks? Is there any reason to use an Embedding layer if I am using sentence encoders like InferSent? I would imagine I could just take a sentence, pass it through a sentence encoder to get a vector, and then feed that vector into the model.
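Here is a small NumPy sketch of my understanding, which I would like confirmed: an Embedding layer is essentially a trainable lookup table indexed by token ids, so if the vectors are computed outside the model, that lookup step seems redundant (all names and sizes below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# An Embedding layer is essentially a trainable lookup table:
# integer token ids index rows of a (vocab_size, dim) weight matrix.
vocab_size, dim = 10, 4
embedding_matrix = rng.normal(size=(vocab_size, dim))

token_ids = np.array([2, 5, 7])
looked_up = embedding_matrix[token_ids]  # what an Embedding layer computes

# If I vectorize the text myself (e.g. with a sentence encoder), I already
# have the vectors, so the model's first layer can consume them directly.
precomputed = looked_up.copy()

# The downstream layers see identical inputs either way; the only
# difference is whether embedding_matrix receives gradient updates,
# which I don't need since I'm not training my vectors further.
assert np.allclose(looked_up, precomputed)
print(looked_up.shape)  # (3, 4)
```

Is it correct that, under these assumptions, skipping the Embedding layer and feeding precomputed vectors straight into the first dense/recurrent layer loses nothing?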