How to Train Word Embeddings With PyTorch

I am trying to learn and practice how to train embeddings for a vocabulary set using PyTorch.

https://pytorch.org/tutorials/beginner/nlp/word_embeddings_tutorial.html

loss = loss_function(log_probs, torch.tensor([word_to_ix[target]], dtype=torch.long))

In the tutorial example above, the loss is calculated between log_probs, which is a 4 x 10 tensor (4 being the number of context words and 10 the embedding_dimension), and the word index of the target, which is an integer ranging from 0 to 49.
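For reference, the surrounding training loop in the tutorial looks roughly like this (paraphrased from the linked page, and assuming model, ngrams, and word_to_ix are defined as in the tutorial, so details may differ):

```python
import torch
import torch.nn as nn
import torch.optim as optim

loss_function = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001)

for context, target in ngrams:
    # Turn the context words into a tensor of vocabulary indices.
    context_idxs = torch.tensor([word_to_ix[w] for w in context], dtype=torch.long)

    model.zero_grad()
    log_probs = model(context_idxs)  # log-probabilities over the whole vocabulary

    # The target is passed as a class index, not as an embedding.
    loss = loss_function(log_probs, torch.tensor([word_to_ix[target]], dtype=torch.long))
    loss.backward()
    optimizer.step()
```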

I can’t understand why the code doesn’t compare the context embedding with the target embedding, but instead compares against the target’s class index, which is a mere integer holding no information…

I would have thought one must go back to the embedding parameters, look up the target’s embedding, and compare it with the context embedding.

Is it done this way just because it’s a tutorial, or am I misinterpreting something?

Thanks for your help in advance.

Check the documentation for nn.NLLLoss here: https://pytorch.org/docs/stable/nn.html. The function is designed to take a class index as its target argument.

Conceptually, it converts the class index into a one-hot (1/0) vector and compares that with the prediction vector. Note that the input to nn.NLLLoss is already a vector of log-probabilities (the output of log_softmax in the tutorial), so the loss simply picks out the entry at the target index and negates it. That is where the comparison is performed.
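Here is a minimal sketch of that mechanism (the 5-word vocabulary and all names below are made up for illustration, not taken from the tutorial):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical prediction: log-probabilities over a 5-word vocabulary.
log_probs = F.log_softmax(torch.randn(1, 5), dim=1)  # shape: (1, vocab_size)
target = torch.tensor([2])                           # class index of the target word

# nn.NLLLoss picks out the log-probability at the target index and negates it.
loss = nn.NLLLoss()(log_probs, target)

# Equivalent "one-hot" view: dot the one-hot target with the prediction vector.
one_hot = F.one_hot(target, num_classes=5).float()
manual = -(one_hot * log_probs).sum()

print(loss.item(), manual.item())  # both print the same value
```

So the class index is not really "holding no information": it selects which entry of the prediction vector should be pushed toward probability 1, and the gradient flows back from there into the embedding weights.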