What does shared embedding mean? And what is the difference between a shared embedding and a regular embedding in a sequence-to-sequence model?
I'm developing a Grammar Error Correction (GEC) model using PyTorch. The model is a convolutional neural network, similar to those used for Statistical Machine Translation (SMT). I have read that shared embeddings improve GEC model results, but I couldn't find a good article explaining what the shared embedding technique is or how to implement it.
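My current guess is that "shared" means the encoder and decoder use one embedding matrix instead of two separate ones (and maybe the decoder's output projection is tied to it as well). Here is a minimal PyTorch sketch of that guess; the class and variable names are just placeholders, not from any real codebase:

```python
import torch.nn as nn

vocab_size, emb_dim = 10000, 256

# One embedding module, passed to both sides of the model
shared_emb = nn.Embedding(vocab_size, emb_dim)

class Encoder(nn.Module):
    def __init__(self, emb):
        super().__init__()
        self.emb = emb  # same module object -> same weight matrix

class Decoder(nn.Module):
    def __init__(self, emb):
        super().__init__()
        self.emb = emb
        # Optionally tie the output projection to the same matrix
        self.out = nn.Linear(emb_dim, vocab_size, bias=False)
        self.out.weight = self.emb.weight

enc, dec = Encoder(shared_emb), Decoder(shared_emb)
print(enc.emb.weight is dec.emb.weight)  # True -> one shared matrix
```

Is this what shared embedding means, or is it something else entirely?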