Why doesn't the embedding layer accept FloatTensor inputs?

We’ve created a model for predicting time-based data (our data type is float). Besides that, we use a scaler to normalize the inputs. We want to use an embedding layer to track similarities, but we got this error:

```
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got torch.FloatTensor instead (while checking arguments for embedding)
```

We understand that the embedding layer only accepts integer indices (Long or Int). Please help us, or consider adding support in a future version for users whose input is of type float.
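A common workaround (a sketch, not an official recommendation; the bucket count and boundaries here are made-up values) is to discretize the continuous floats into integer bucket ids with `torch.bucketize`, and feed those ids to `nn.Embedding`:

```python
import torch
import torch.nn as nn

# Hypothetical continuous inputs, e.g. scaled time-series values in [0, 1).
values = torch.tensor([0.03, 0.42, 0.87, 0.42])

# Discretize the floats into integer bucket indices, then look those
# indices up in the embedding table. 9 boundaries -> 10 buckets.
num_buckets = 10
boundaries = torch.linspace(0.1, 0.9, num_buckets - 1)
indices = torch.bucketize(values, boundaries)  # LongTensor of bucket ids

emb = nn.Embedding(num_embeddings=num_buckets, embedding_dim=4)
out = emb(indices)  # works: indices is an integer tensor
print(indices.dtype, out.shape)
```

With this scheme, equal (or similarly bucketed) float values map to the same embedding row, which preserves the "tracking similarities" intent at the chosen granularity.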

Embedding layers are “lookup” layers: the input indexes into the weight table, so it must be an integer type. Could you explain your use case a bit more, and how floating-point inputs should behave in an embedding?
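To illustrate the lookup semantics (a minimal sketch with arbitrary sizes): the forward pass of `nn.Embedding` is equivalent to plain integer indexing into `emb.weight`, which is why a float input has no defined meaning there.

```python
import torch
import torch.nn as nn

# An embedding is a learnable lookup table: forward() selects rows of
# emb.weight by index, so the input must be integer indices.
emb = nn.Embedding(num_embeddings=5, embedding_dim=3)
idx = torch.tensor([0, 3, 3, 1])

out = emb(idx)
rows = emb.weight[idx]  # plain integer indexing yields the same rows
assert torch.equal(out, rows)

# A FloatTensor input raises the RuntimeError from the report above.
try:
    emb(torch.tensor([0.0, 3.0]))
except RuntimeError as e:
    print("RuntimeError:", e)
```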