Freezing Hidden Layer Parameters in RNN

Hi,

I have trained an encoder-decoder on some data. Now I want to reuse the same weights in the hidden layers of the encoder and decoder for different data. The only changes I need are in the embedding layers of the encoder and decoder and in the output layer of the decoder.
The network is the same as this.
Any pointers on how to do this?

Thanks.

You can copy the .weight.data of the hidden (recurrent) layers over from the trained encoder-decoder into the new one, and leave the new embedding and output layers freshly initialized.
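A minimal sketch of that idea, assuming an architecture like the seq2seq tutorial (nn.Embedding followed by nn.GRU, with an nn.Linear output in the decoder). The class names EncoderRNN/DecoderRNN, the attribute names (embedding, gru, out), and the vocabulary sizes are assumptions here; adapt them to your own model:

```python
import torch
import torch.nn as nn

# Assumed stand-ins for the trained architecture: Embedding -> GRU,
# plus a Linear output layer in the decoder.
class EncoderRNN(nn.Module):
    def __init__(self, vocab_size, hidden_size):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size)

class DecoderRNN(nn.Module):
    def __init__(self, vocab_size, hidden_size):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size)
        self.out = nn.Linear(hidden_size, vocab_size)

hidden_size = 256
old_enc = EncoderRNN(5000, hidden_size)   # assume these were trained
old_dec = DecoderRNN(5000, hidden_size)   # and loaded from a checkpoint

# New models for the new data: different vocab sizes, fresh embeddings
# and a fresh decoder output layer.
new_enc = EncoderRNN(8000, hidden_size)
new_dec = DecoderRNN(8000, hidden_size)

# Copy only the recurrent (hidden) weights from the trained models.
new_enc.gru.load_state_dict(old_enc.gru.state_dict())
new_dec.gru.load_state_dict(old_dec.gru.state_dict())

# Freeze the copied GRU parameters so training on the new data
# does not change them.
for p in list(new_enc.gru.parameters()) + list(new_dec.gru.parameters()):
    p.requires_grad = False

# Optimize only what still requires gradients
# (the new embeddings and the decoder output layer).
params = [p for p in list(new_enc.parameters()) + list(new_dec.parameters())
          if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.01)
```

You can also copy tensors parameter by parameter, e.g. new_enc.gru.weight_ih_l0.data.copy_(old_enc.gru.weight_ih_l0.data), but calling load_state_dict on the GRU sub-module copies all of its weights and biases in one go.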