# Reverse nn.Embedding

How do you go back from the output of nn.Embedding to the original discrete values?

If you have the feature vector, you could calculate the difference against each row of the embedding matrix and select the index with a zero (or close-to-zero) difference.
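As a minimal sketch of that difference-based lookup (the table size and the index 4 here are arbitrary choices for illustration):

```python
import torch

# Hypothetical setup: an embedding table and one embedded vector
emb = torch.nn.Embedding(10, 50)
vec = emb(torch.tensor([4]))  # feature vector for index 4

# Difference against every row of the weight matrix,
# then pick the row with the smallest (here: zero) difference
diffs = (emb.weight - vec).abs().sum(dim=1)
index = torch.argmin(diffs)
print(index)  # tensor(4)
```

Since `vec` is an exact copy of row 4, its difference is exactly zero, so `argmin` recovers the original index.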

I’m not sure I understand what you mean. Say I have some discrete-valued vector that I want to represent in the latent space: how would I get the feature vector? Do you mean extracting the feature vectors from a given model and taking the differences?

```
import torch

a = torch.nn.Embedding(10, 50)
# Discrete values (the weight matrix)
a.weight

# Index lookup
a(torch.LongTensor([1, 1, 0]))
```

That still gives a real-valued vector. I want to reverse this operation, something akin to an encoder-decoder.

The feature vector would be the output of the embedding layer and you could calculate the difference afterwards to get the index back:

```
import torch

emb = torch.nn.Embedding(10, 50)
x = torch.tensor([3])

out = emb(x)
out.shape         # torch.Size([1, 50])
emb.weight.shape  # torch.Size([10, 50])

# The row whose difference to the output is (numerically) zero
rev = ((out - emb.weight).abs().sum(1) < 1e-6).nonzero()
print(rev)
# > tensor([[3]])
```
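For a batch of embedded vectors, the same idea can be written as a nearest-neighbor lookup with `torch.cdist`, which also tolerates small numerical perturbations (a sketch; the table size and indices here are arbitrary):

```python
import torch

emb = torch.nn.Embedding(10, 50)
x = torch.tensor([3, 7, 1])
out = emb(x)

# Pairwise L2 distances between each output row and every embedding row,
# then take the index of the closest row for each output
dists = torch.cdist(out, emb.weight)  # shape (3, 10)
recovered = dists.argmin(dim=1)
print(recovered)  # tensor([3, 7, 1])
```

Using `argmin` rather than an exact-zero test makes the lookup robust if the vectors have been altered slightly, e.g. by floating-point round-trips.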

@safin_salih Understood

You can use this:

```
import torch

a = torch.nn.Embedding(10, 50)
b = torch.LongTensor([2, 8])
results = a(b)

def get_embedding_index(x):
    # Index of the embedding row that matches x exactly, or None if no row matches
    matches = (a.weight == x).all(dim=1).nonzero()
    if matches.numel() == 0:
        return None
    return matches.item()

indices = torch.Tensor(list(map(get_embedding_index, results)))
indices
```
```
tensor([2., 8.])
```