Hello,
as far as I know, the WordEmbedding layer is used to store/retrieve embeddings of words for given indices (depending on the embedding size and dictionary size).
So, for example, a word is represented as an index and a sentence as a list of indices.
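For reference, this is the index-based usage I mean (just a minimal sketch with PyTorch's nn.Embedding as a stand-in, since I'm not sure how WordEmbedding works internally; the vocabulary size, embedding size, and indices are made up):

```python
import torch
import torch.nn as nn

# Hypothetical numbers: a vocabulary of 10 words, 4-dimensional embeddings.
vocab_size, embed_dim = 10, 4
embedding = nn.Embedding(vocab_size, embed_dim)

sentence = torch.tensor([2, 5, 7])  # a sentence as a list of word indices
vectors = embedding(sentence)       # shape: (3, embed_dim)
```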
What if I don’t want to pass indices, but, for example, a one-hot encoding of my given word(s)? So, more or less, do the encoding manually? I looked at the WordEmbedding class but couldn’t really figure out whether I should subclass WordEmbedding myself or build my own embedding class (one-hot encoding + linear layer), as sketched below.
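Roughly, this is what I have in mind (again only a sketch with stand-in PyTorch layers and made-up sizes; a bias-free nn.Linear should reproduce the usual lookup when its weight matrix is the transposed embedding table):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, embed_dim = 10, 4
sentence = torch.tensor([2, 5, 7])

# One-hot encode the indices manually, then project with a bias-free linear layer.
one_hot_embed = nn.Linear(vocab_size, embed_dim, bias=False)
one_hot = F.one_hot(sentence, num_classes=vocab_size).float()  # (3, vocab_size)
vectors = one_hot_embed(one_hot)                               # (3, embed_dim)

# Sanity check: an index-based lookup with the same (transposed) weights
# should give identical vectors.
lookup = nn.Embedding(vocab_size, embed_dim)
lookup.weight.data = one_hot_embed.weight.data.t()
assert torch.allclose(vectors, lookup(sentence))
```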
I hope it is clear what I want to achieve.
Greetings,
Patrick