Classifying text using labeled BERT feature vectors

I have feature vectors created by BERT-as-a-Service (uncased large, which is TensorFlow-based code) for 1000 short texts, each of which carries one of 10 labels. How can I predict the labels using my own dataset of 1000 points and their feature vectors? Can you point me to similar feature-vector-based code? Does it matter that these feature vectors were created by TensorFlow?

It does not matter that the feature vectors were created with TensorFlow; in the end they are all just float values. The only difference is how you import them into your code.
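As a minimal sketch of that point: once the vectors are plain NumPy floats, any framework can consume them. The array shape and file here are stand-ins (uncased large BERT produces 1024-dimensional vectors; in practice you would `np.load` the exported file rather than generate random data):

```python
import numpy as np
import torch

# Stand-in for vectors exported by BERT-as-a-Service; in practice you
# would load them, e.g. features = np.load("bert_features.npy").
features = np.random.rand(1000, 1024).astype(np.float32)

# The TensorFlow origin is irrelevant once the data is raw floats:
# PyTorch (or any other framework) can wrap them directly.
X = torch.from_numpy(features)
print(X.shape)  # torch.Size([1000, 1024])
```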

I don’t have code to hand, but the general approach is:

  1. Create a vocab for your data
  2. Create a dict or something for the embeddings
  3. Use your favourite model, with nn.Embedding as the first layer, initialized from those embeddings
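The three steps above can be sketched roughly as follows. The vocab, embedding dimension, and classifier head are all toy stand-ins (your real vocab and vectors would come from your corpus and the BERT-as-a-Service output); `nn.Embedding.from_pretrained` is the standard PyTorch way to seed the first layer with precomputed vectors:

```python
import torch
import torch.nn as nn

# Step 1: a (toy) vocab mapping tokens to integer ids.
vocab = {"<pad>": 0, "hello": 1, "world": 2}
emb_dim = 8

# Step 2: a dict mapping each token to its embedding vector
# (random here; yours would hold the real BERT-derived vectors).
emb_dict = {w: torch.randn(emb_dim) for w in vocab}

# Stack the vectors into a weight matrix in vocab-id order.
weights = torch.stack([emb_dict[w] for w in vocab])

class Classifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Step 3: nn.Embedding as the first layer, seeded with the
        # precomputed vectors; freeze=True keeps them fixed.
        self.emb = nn.Embedding.from_pretrained(weights, freeze=True)
        self.fc = nn.Linear(emb_dim, num_classes)

    def forward(self, token_ids):
        # Mean-pool the token embeddings into one vector per text,
        # then classify into the 10 labels.
        pooled = self.emb(token_ids).mean(dim=1)
        return self.fc(pooled)

model = Classifier()
batch = torch.tensor([[1, 2, 0]])  # one text: "hello world <pad>"
logits = model(batch)
print(logits.shape)  # torch.Size([1, 10])
```

From here, training is an ordinary classification loop (e.g. `nn.CrossEntropyLoss` against the integer labels).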