# Use of nn.Embedding for floating type numbers

if I have a tensor like

```
torch.tensor([6., 4., 9., 8.], requires_grad=True)
```

and I want to represent each of these numbers by `n` parameters, how do I use `nn.Embedding` for this?
if I do

```
x = nn.Embedding(4, 10)
```

then calling `x(input)` requires `input` to be a `LongTensor`. How do I pass `input` as a floating tensor so that the embedding treats each value as an index, i.e. the 0th row would be for 6., the 1st row for 4., and so on?

Hi,

What would be `n` in your example, 4?
Let’s call `M` the maximum value you can have in your input tensor, and `n` the embedding dimension.
You would have to create your layer as:

```
x = nn.Embedding(M+1, n)
```

In your example, 9 seems to be the biggest value so you can do:

```
emb = nn.Embedding(10, 10)  # M = 9 and n = 10
```

and to use it, just cast the input to long:

```
t = torch.tensor([6., 4., 9., 8.], requires_grad=True)

emb(t.long())
```
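To make this concrete, here is a minimal self-contained sketch of the lookup, using the example tensor from above:

```
import torch
import torch.nn as nn

t = torch.tensor([6., 4., 9., 8.], requires_grad=True)
emb = nn.Embedding(10, 10)  # M = 9, n = 10

# cast the float values to integer indices and look up one row per value
out = emb(t.long())
print(out.shape)  # torch.Size([4, 10])
```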

I hope I understood correctly what you are trying to do here.

No no, I want to represent each of these numbers as a vector; if I represent each number as a vector of size 10, then `n` would be 10.

so a number would be represented as

```
6 -> [0.01, 0.2, 0.5, -0.2, 0.6, 0.2, 0.7, 0.3, 0.02, 0.5]
```

similarly for 4, 9, and 8.
Finally, I want a matrix of size [4x10], representing 40 parameters, 10 for each of 6, 4, 9, 8.

if my tensor was

```
torch.tensor([2., 4., 6.], requires_grad=True)
```

then I want a matrix of size [3x10], representing 30 parameters, 10 for each of 2, 4, 6.

I think embedding is not the correct way to do this

Well, I believe that’s exactly what my example does.

Let’s check:

```
t = torch.tensor([2., 4., 6.], requires_grad=True)
emb = nn.Embedding(10, 10)

emb(t.long()).shape  # torch.Size([3, 10])
```

Each number maps to a row in `emb`.
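A quick check of that mapping (a small sketch reusing the same layer sizes): the row returned for a value is exactly the corresponding row of the layer's weight table, which holds the learnable parameters.

```
import torch
import torch.nn as nn

emb = nn.Embedding(10, 10)
t = torch.tensor([2., 4., 6.])

# the value 2. always indexes row 2 of the weight table
row_for_2 = emb(t.long())[0]
assert torch.equal(row_for_2, emb.weight[2])
```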

Or am I still missing something?
And the numbers in `[0.01, 0.2, 0.5, -0.2, 0.6, 0.2, 0.7, 0.3, 0.02, 0.5]` are learnable params, right?


Yes, it is correct. Sorry, my mistake.

No worries! I’m happy that was what you were looking for.

The question I have here: what is the interpretation of the 10-dimensional vectors? Why 10 dimensions? When the input feature is categorical, the interpretation is straightforward. In your case, the entire data universe is only 3 numbers. Are those categorical values represented by numbers, or actual measurements?

@spanev Hi, thanks for this. I have a question here: after casting `t` to long, we can no longer backpropagate all the way through `t` (since the cast is not differentiable). Is there any way to do that as well? TIA.
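The point about the cast can be demonstrated with a small sketch (reusing the toy tensor from earlier): after `backward()`, gradients reach the embedding table, but the float input `t` receives none, because the integer cast disconnects it from the autograd graph.

```
import torch
import torch.nn as nn

t = torch.tensor([2., 4., 6.], requires_grad=True)
emb = nn.Embedding(10, 10)

out = emb(t.long()).sum()
out.backward()

print(t.grad)                        # None: the cast breaks the graph
print(emb.weight.grad is not None)   # True: the table itself gets gradients
```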