I am learning Stanford CS224n, the Neural Language Model part. The first naive model assumes window_size = 4: concatenate the word embeddings [e1, e2, e3, e4], multiply by a matrix W, then hidden layer -> output.
The slides say one of its disadvantages is: "e1 and e2 are multiplied by completely different weights in W. No symmetry in how the inputs are processed." I am stuck on these questions.

I am really confused about what "multiplied by different weights" means. Can someone demonstrate it with some dimensionality analysis?
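To show where I am, here is a minimal NumPy sketch of my current understanding (the dimensions d=3 and h=5 and all variable names are made up by me, not from the lecture):

```python
import numpy as np

d, h = 3, 5                      # embedding dim, hidden dim (toy values)
rng = np.random.default_rng(0)
e1, e2, e3, e4 = (rng.standard_normal(d) for _ in range(4))

x = np.concatenate([e1, e2, e3, e4])   # shape (4d,) = (12,)
W = rng.standard_normal((h, 4 * d))    # shape (h, 4d) = (5, 12)
hidden = W @ x                         # shape (h,)

# W can be viewed as 4 column blocks, one per window position:
W1, W2, W3, W4 = np.split(W, 4, axis=1)   # each block is (h, d)
same = W1 @ e1 + W2 @ e2 + W3 @ e3 + W4 @ e4
assert np.allclose(hidden, same)

# If I swap two words, the output generally changes, because
# position 1 uses W1 while position 2 uses W2 (W1 != W2):
x_swapped = np.concatenate([e2, e1, e3, e4])
assert not np.allclose(W @ x_swapped, hidden)
```

Is this block view (each position gets its own W_i) what "multiplied by different weights" refers to?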

The lecturer also says "you are kind of learning some similar functions many times." In my understanding, concatenation is very common in the last layers of neural networks (e.g., in fusion tasks). Does that mean what I have been doing is not good? If not, what is the difference between what the lecturer describes and our common use of concatenation in networks?
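For context, this is the kind of fusion concatenation I usually write (a toy example of my own, with made-up names and shapes, not from any real model):

```python
import numpy as np

rng = np.random.default_rng(1)
img_feat = rng.standard_normal(8)    # pretend output of an image branch
txt_feat = rng.standard_normal(6)    # pretend output of a text branch

fused = np.concatenate([img_feat, txt_feat])       # shape (14,)
W_out = rng.standard_normal((2, fused.shape[0]))   # toy 2-class head
logits = W_out @ fused                             # shape (2,)
assert logits.shape == (2,)
```

My intuition is that here the two inputs are different kinds of features, so giving each its own weight block seems intended, whereas the four word positions in the window arguably play the same role. Is that the distinction the lecturer means?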
Really hope someone can help me.