I have two networks: an Encoder (already trained) and a Decoder.
I'd like to train both nets end-to-end, but keep the Decoder's parameters frozen, i.e. no updates to them during training.
I've tried setting all of the Decoder's parameters to `requires_grad = False` and calling `decoder.eval()` before training.
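Here is roughly how I freeze it (a simplified sketch, not my exact code; the `decoder` architecture below is just a placeholder for my real one):

```python
import torch.nn as nn

# Placeholder standing in for my actual Decoder architecture
decoder = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 8))

# Mark every Decoder parameter as frozen so autograd stops tracking them
for param in decoder.parameters():
    param.requires_grad = False

# Switch the Decoder to eval mode (changes dropout / batch-norm behaviour)
decoder.eval()
```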
Also, I optimize only over the Encoder's parameters, so those are the only ones I pass to my optimizer object. In the train/test phases I call `encoder.train()` and `encoder.eval()` respectively.
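My training setup looks roughly like this (simplified; `encoder`, `loss_fn`, `train_loader`, and `num_epochs` are stand-ins for my real model, loss, data loader, and epoch count):

```python
import torch

# Only the Encoder's parameters are handed to the optimizer
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

for epoch in range(num_epochs):
    encoder.train()                          # train mode for the Encoder
    for inputs, targets in train_loader:
        optimizer.zero_grad()
        outputs = decoder(encoder(inputs))   # end-to-end forward pass
        loss = loss_fn(outputs, targets)
        loss.backward()                      # gradients flow through the frozen Decoder
        optimizer.step()                     # should update Encoder parameters only

    encoder.eval()                           # eval mode for the test pass
```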
When I compare the Decoder's parameters before and after training, I find that their values have changed.
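This is roughly how I do the comparison (a sketch; the snapshot is taken right before the training loop):

```python
import copy
import torch

# Snapshot the Decoder's state before training; note that state_dict()
# contains buffers (e.g. batch-norm running stats) as well as parameters
before = copy.deepcopy(decoder.state_dict())

# ... run the training loop from above ...

# Report every tensor whose value changed during training
for name, tensor in decoder.state_dict().items():
    if not torch.equal(before[name], tensor):
        print(f"{name} changed")
```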
Please help me understand what might have gone wrong.