How to go from an embedding to an nn.Embedding index

If I have a tensor representing an embedding, how could I go about getting the index of the most similar embedding in PyTorch’s nn.Embedding?

The usual way is to compute the inner products of your tensor with all of the embedding vectors (as one large matrix product) and take a softmax over them. That puts you in the same position as a standard classification problem with a softmax on top.
There are more advanced approaches (e.g. mixture of softmaxes), but this should give you a start.
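A minimal sketch of the idea (the sizes here are arbitrary, just for illustration):

```python
import torch
import torch.nn as nn

# Toy embedding table: 10 embeddings of dimension 4
emb = nn.Embedding(num_embeddings=10, embedding_dim=4)

# The tensor we want to match against the table
query = torch.randn(4)

# Inner product with every row of the embedding table,
# computed as a single matrix product -> shape (10,)
scores = emb.weight @ query

# Softmax turns the scores into a distribution over indices,
# exactly as in a classification head
probs = torch.softmax(scores, dim=0)

# Index of the most similar embedding. Note argmax of probs
# equals argmax of scores, since softmax is monotonic.
index = torch.argmax(probs).item()
```

If you only need the index (not the distribution), you can skip the softmax and take `torch.argmax(scores)` directly; the softmax matters when you want to train through this step.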
Check out Stanford’s CS224n lectures on word vectors for the full background, and OpenNMT-py to see it in action.

Best regards