Why isn't FloatTensor supported as input to the embedding layer?

Embedding layers are "lookup" layers: the input indexes into a table of vectors, so it must be an integer type. Could you explain your use case a bit more and how floating point inputs should behave in an embedding?
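As a sketch of the distinction, the snippet below shows the usual integer-index lookup, plus one common workaround if you really have "soft" float weights over the vocabulary (e.g. a probability distribution): multiply them against the embedding weight matrix directly. The tensor sizes here are arbitrary illustration values.

```python
import torch
import torch.nn as nn

# An embedding is a lookup table: row i holds the vector for index i,
# so the input must be an integer (LongTensor) index tensor.
emb = nn.Embedding(num_embeddings=10, embedding_dim=4)

idx = torch.tensor([1, 3, 7])  # integer indices: works
out = emb(idx)                 # shape (3, 4)

# With "soft" float indices (a distribution over the vocabulary),
# emulate the lookup as a weighted sum of the rows via matmul:
soft = torch.softmax(torch.randn(3, 10), dim=-1)  # (batch, vocab)
soft_out = soft @ emb.weight                      # (batch, dim) -> (3, 4)
```

The matmul form is differentiable with respect to the float weights, which is usually what people want when they try to feed floats into an embedding.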