AutoEncoder transfer learning: how to freeze encoder / decoder

Hi, I am working on anomaly detection with autoencoders.
My network looks like this:

class net(nn.Module):
    def __init__(self, out_dim):
        super().__init__()
        # from 32x128 -> 4x4 -> Flatten()
        self.encoder = nn.Sequential(...)
        # a dense layer, whose output is reshaped in the forward function
        self.embedding = nn.Linear(4*4, out_dim)
        # go from 4x4 -> 32x128
        self.decoder = nn.Sequential(...)

    def forward(self, x):
        # encode, embed, reshape, decode
        ...
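Filled in with concrete layers, a minimal runnable version of this sketch could look like the following; the single-channel conv shapes and `out_dim=16` are illustrative assumptions, not my actual architecture:

```python
import torch
import torch.nn as nn

class net(nn.Module):
    def __init__(self, out_dim=16):
        super().__init__()
        # from 1x32x128 -> 1x4x4 -> Flatten() (illustrative single conv)
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 1, kernel_size=(8, 32), stride=(8, 32)),
            nn.Flatten(),
        )
        # dense bottleneck; its output is reshaped in forward()
        self.embedding = nn.Linear(4 * 4, out_dim)
        # go from 1x4x4 -> 1x32x128 (illustrative single transposed conv)
        self.decoder = nn.ConvTranspose2d(1, 1, kernel_size=(8, 32), stride=(8, 32))

    def forward(self, x):
        z = self.encoder(x)      # encode:  (N, 16)
        z = self.embedding(z)    # embed:   (N, out_dim)
        z = z.view(-1, 1, 4, 4)  # reshape; assumes out_dim == 4*4
        return self.decoder(z)   # decode:  (N, 1, 32, 128)

x = torch.randn(2, 1, 32, 128)
out = net()(x)
print(out.shape)  # torch.Size([2, 1, 32, 128])
```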

Now I would like to train the encoder and decoder representations and then transfer them to a new dataset.
The only trainable layer should be the embedding.

I am a bit unsure how to do this:

  1. What does this mean for my code in the abstract sense?
    • Should I have something like pretrain_model, which is fully trained, and then something like actual_model = pretrain_model with some layers frozen?
  2. How do I actually freeze the encoder and decoder and let only the embedding learn?
  3. Is it actually a good idea to let only the embedding learn?

  1. I don’t fully understand the question, and I am unsure why different variables would be used.
  2. You can iterate over the .parameters() of the submodules and set their .requires_grad attribute to False in order to freeze them.
  3. I also don’t know, as it would depend on your use case.
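Point 2 can be sketched in code as follows; the tiny linear encoder/decoder are stand-ins for the real stacks, and the "pretrained.pt" path is hypothetical:

```python
import torch
import torch.nn as nn

# Stand-in for the autoencoder; the real encoder/decoder are larger.
class net(nn.Module):
    def __init__(self, out_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(32 * 128, 4 * 4), nn.ReLU())
        self.embedding = nn.Linear(4 * 4, out_dim)
        self.decoder = nn.Sequential(nn.Linear(out_dim, 32 * 128))

model = net()
# 1) Pretrain on the source dataset, then either reuse the same instance
#    or load the saved weights into a fresh one (path is hypothetical):
# model.load_state_dict(torch.load("pretrained.pt"))

# 2) Freeze encoder and decoder by disabling gradients on their parameters.
for module in (model.encoder, model.decoder):
    for p in module.parameters():
        p.requires_grad = False

# 3) Hand the optimizer only the still-trainable parameters (the embedding).
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)  # ['embedding.weight', 'embedding.bias']
```

With this setup, backward() still flows gradients through the frozen layers to reach the embedding, but their weights never change because the optimizer never sees them and no .grad is accumulated for them.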