This is a clipping from an article: “Here instead of using the embedding, I simply used a linear transformation to transform the 11-dimensional data into an n-dimensional space. This is similar to the embedding with words.”

I can’t understand how I can do such a linear transformation. Can someone explain to me with a simple example?

Maybe the author is just using a simple linear layer: `nn.Linear(11, n_dim)`
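A minimal sketch of what that could look like (the choice of `n_dim = 8` and the batch size are just illustrative; the article doesn't specify them):

```python
import torch
import torch.nn as nn

n_dim = 8                     # illustrative target dimension
proj = nn.Linear(11, n_dim)   # learnable weight (n_dim x 11) plus bias

x = torch.randn(4, 11)        # batch of 4 eleven-dimensional inputs
z = proj(x)                   # each row projected into n-dimensional space
print(z.shape)                # torch.Size([4, 8])
```

So the "linear transformation" is just a matrix multiply (plus bias) applied to each 11-dimensional vector, and its weights are learned along with the rest of the model.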


For the vector that we represented in n-dimensional space (similar to an embedding): should its n-dimensional value be constant, or is it a trained parameter?

The n-dimensional outputs of the linear layer will not be constant (for reasonable weight values). They will differ for each input vector, and the projection of a given input vector changes over the course of training as the weights of the linear layer are adjusted.
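A small sketch to demonstrate this (the dummy loss and learning rate here are arbitrary, just to move the weights):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
proj = nn.Linear(11, 8)
x = torch.randn(1, 11)          # one fixed input vector

before = proj(x).detach().clone()

# one gradient step on a dummy loss changes the layer's weights
opt = torch.optim.SGD(proj.parameters(), lr=0.1)
loss = proj(x).pow(2).sum()
loss.backward()
opt.step()

after = proj(x).detach()
print(torch.allclose(before, after))   # the same input now projects elsewhere
```

The same input vector maps to a different n-dimensional point after the update, so the projection is a function of both the input and the current (trained) weights, not a fixed lookup.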
