How to freeze the weights of a pretrained network while training 2 nets

Hi!

I have 2 networks: an Encoder (already trained) + a Decoder.
I’d like to train both nets end-to-end.
But I’d like to keep the Decoder’s parameters frozen: no updates while training!

  1. I’ve tried setting all the Decoder’s parameters with:
    “requires_grad = False” + “decoder.eval()” before training.

  2. Also, I only optimize over the Encoder’s parameters, so only those are passed to my optimizer object.
    In the train / test phases I call “encoder.train()” & “encoder.eval()” respectively (see the sketch after this list).
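Roughly, my setup looks like this (a simplified sketch with toy placeholder modules standing in for my real networks):

```python
import torch
import torch.nn as nn

# Toy placeholders for my actual networks (the real ones are larger)
encoder = nn.Linear(10, 5)
decoder = nn.Linear(5, 10)

# 1. Freeze the Decoder and put it in eval mode
for p in decoder.parameters():
    p.requires_grad = False
decoder.eval()

# 2. Give the optimizer only the Encoder's parameters
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

encoder.train()   # in the training loop
# encoder.eval()  # during testing
```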

When I compare the Decoder’s parameters before & after training, I find that the values have changed.
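The comparison itself is roughly this (a sketch, reusing the decoder from the snippet above):

```python
import copy
import torch

# Snapshot the Decoder's weights before training
before = copy.deepcopy(decoder.state_dict())

# ... run the training loop ...

# Compare each tensor afterwards
for name, param in decoder.state_dict().items():
    if not torch.equal(before[name], param):
        print(f"{name} changed!")
```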

Please help me understand what might have gone wrong.

Thanks,
Or Rimoch

Hi @Or_Rimoch,

I assume that you mean:

Encoder + Decoder (already trained), i.e. the Decoder is the pretrained part you want to freeze.

If you set the Decoder’s parameters’ requires_grad to False and you don’t pass them to the optimizer, there shouldn’t be any way for them to get modified.
Do you have a link to the code or a simple repro?
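A minimal sketch of that setup, with a sanity check that none of the Decoder’s parameters ended up in the optimizer’s param groups (toy modules standing in for your real ones):

```python
import torch
import torch.nn as nn

encoder = nn.Linear(10, 5)
decoder = nn.Linear(5, 10)

# Freeze the Decoder
for p in decoder.parameters():
    p.requires_grad = False

# Only pass parameters that still require gradients to the optimizer
trainable = [p for p in list(encoder.parameters()) + list(decoder.parameters())
             if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.1)

# Sanity check: no Decoder parameter should appear in the optimizer's param groups
decoder_params = {id(p) for p in decoder.parameters()}
in_optimizer = {id(p) for group in optimizer.param_groups for p in group["params"]}
assert decoder_params.isdisjoint(in_optimizer)
```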