I have a bag of words `x` and I want to look up an embedding for each word in `x`. But I also want to weight the words differently by a weight vector `w`, which has the same size as `x`. Can you show me the best way to do this?
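To make this concrete, here is a toy version of what I currently do with `nn.Embedding` (the vocabulary size, dimensions, and values are made up):

```python
import torch
import torch.nn as nn

emb = nn.Embedding(100, 16)            # vocab of 100 words, 16-dim embeddings
x = torch.tensor([3, 17, 42])          # word indices in the bag
w = torch.tensor([0.5, 0.3, 0.2])      # one weight per word in x

vecs = emb(x)                          # look up embeddings, shape (3, 16)
out = (w.unsqueeze(1) * vecs).sum(0)   # weighted sum over the bag, shape (16,)
```

Is there a cleaner or more built-in way to express this?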
I personally have never used PyTorch for my word embeddings, so my answer might not be exactly what you are looking for. I use Gensim's Word2Vec model to learn an embedding for each word. You can train the Word2Vec model with either CBOW or Skip-gram. Once the model is trained, you can get the feature vectors out of it as numpy arrays. You can then convert the numpy arrays to PyTorch tensors with `torch.from_numpy()` and feed those into your model.
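A minimal sketch of the weighting step, using random numpy arrays to stand in for the vectors you would pull out of a trained Word2Vec model (in Gensim, `model.wv[word]` returns a numpy array):

```python
import numpy as np

# Stand-in for vectors from a trained Gensim model, e.g.:
#   vecs = np.stack([model.wv[word] for word in bag])
rng = np.random.default_rng(0)
vecs = rng.normal(size=(4, 8))        # 4 words in the bag, 8-dim embeddings
w = np.array([0.1, 0.4, 0.4, 0.1])    # per-word weights, same length as the bag

weighted = w @ vecs                   # weighted sum of embeddings, shape (8,)
# To feed this into a PyTorch model, convert with torch.from_numpy(weighted).
```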
See the Gensim Word2Vec documentation for reference.