Freezing Hidden Layer Parameters in an RNN


I have trained an encoder-decoder on some data. Now I want to reuse the same weights in the hidden layers of the encoder and decoder for different data. The only changes I need are in the embedding layers of the encoder and decoder and in the output layer of the decoder.
The network is the same as this.
Any pointers on how to do this?


You can copy the hidden-layer weights over from the trained encoder-decoder into the new one, leaving the embedding layers and the output layer to be initialized and trained fresh.
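A minimal sketch of how this could look in PyTorch, assuming a simple GRU-based encoder-decoder (the `Seq2Seq` class, its layer names, and the dimensions here are all illustrative, not from the original thread): copy only the recurrent weights via a filtered `state_dict` with `strict=False`, then set `requires_grad = False` on the copied layers so only the new embeddings and output layer are updated.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Hypothetical minimal encoder-decoder: embeddings -> GRU encoder/decoder -> output."""
    def __init__(self, vocab_size, emb_dim=32, hid_dim=64):
        super().__init__()
        self.enc_embedding = nn.Embedding(vocab_size, emb_dim)
        self.dec_embedding = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

trained = Seq2Seq(vocab_size=1000)   # stands in for the already-trained model
new_model = Seq2Seq(vocab_size=500)  # new data -> new embedding/output sizes

# Copy only the recurrent (hidden-layer) weights; skip embeddings and output.
hidden_weights = {k: v for k, v in trained.state_dict().items()
                  if k.startswith(('encoder.', 'decoder.'))}
new_model.load_state_dict(hidden_weights, strict=False)

# Freeze the copied layers so training updates only embeddings and the output layer.
for module in (new_model.encoder, new_model.decoder):
    for p in module.parameters():
        p.requires_grad = False

# Give the optimizer only the trainable parameters.
optimizer = torch.optim.Adam(
    (p for p in new_model.parameters() if p.requires_grad), lr=1e-3)
```

With this setup, gradients still flow *through* the frozen GRUs (so the embeddings below the encoder are trained), but the GRU weights themselves never change.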