Hi, I am working on anomaly detection via autoencoders.
So my network looks like this:
class net(nn.Module):
    def __init__(self, out_dim):
        super().__init__()
        # from 32x128 -> 4x4 -> Flatten()
        self.encoder = nn.Sequential(...)
        # a dense layer, whose output is reshaped in the forward function
        self.embedding = nn.Linear(4 * 4, out_dim)
        # go from 4x4 -> 32x128
        self.decoder = nn.Sequential(...)

    def forward(self, x):
        # encode, embed, reshape, decode
        ...
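To make the shapes concrete, here is a runnable toy version of what I mean, with dense layers standing in for the real conv stacks (the 32x128 and 4x4 sizes are from the comments above; `out_dim=16` is an assumption so the embedding output can be reshaped to 4x4):

```python
import torch
import torch.nn as nn

class net(nn.Module):
    def __init__(self, out_dim=16):
        super().__init__()
        # toy stand-ins for the real conv stacks: 32x128 -> 4x4 -> flat
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 128, 4 * 4))
        self.embedding = nn.Linear(4 * 4, out_dim)
        # 4x4 -> 32x128
        self.decoder = nn.Sequential(
            nn.Flatten(), nn.Linear(4 * 4, 32 * 128), nn.Unflatten(1, (32, 128))
        )

    def forward(self, x):
        z = self.encoder(x)      # encode:  (batch, 16)
        e = self.embedding(z)    # embed:   (batch, out_dim)
        e = e.view(-1, 4, 4)     # reshape: (batch, 4, 4)
        return self.decoder(e)   # decode:  (batch, 32, 128)

x = torch.randn(8, 32, 128)
out = net()(x)
print(out.shape)  # torch.Size([8, 32, 128])
```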
Now I would like to pretrain the encoder and decoder representations and then transfer them to a new dataset.
The only trainable layer should be the embedding.
I am a bit unsure how to do this:
- What does this mean for my code, in the abstract?
- Should I have something like pretrain_model, which is fully trained, and then something like actual_model = pretrain_model with some layers frozen?
- How do I actually freeze the encoder and decoder so that only the embedding learns?
- Is it actually a good idea to let only the embedding learn?
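For the freezing step, this is roughly what I have in mind (minimal stand-in modules here; in reality pretrain_model would be the fully trained net from above):

```python
import torch
import torch.nn as nn

class net(nn.Module):
    # minimal stand-in with the same three blocks as the real model
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(8, 4)
        self.embedding = nn.Linear(4, 4)
        self.decoder = nn.Linear(4, 8)

pretrain_model = net()  # pretend this was fully trained on the first dataset

# freeze encoder and decoder so backprop leaves them untouched
for p in pretrain_model.encoder.parameters():
    p.requires_grad_(False)
for p in pretrain_model.decoder.parameters():
    p.requires_grad_(False)

# hand the optimizer only the parameters that should still learn
optimizer = torch.optim.Adam(pretrain_model.embedding.parameters(), lr=1e-3)

trainable = [n for n, p in pretrain_model.named_parameters() if p.requires_grad]
print(trainable)  # only embedding.weight / embedding.bias remain
```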