Use pre-trained autoencoder for classification or regression

Hello!!

I trained an autoencoder and now I want to use that model, with its trained weights, for classification. I suppose I have to freeze the weights and layers of the encoder and then add classification layers on top, but I am a bit confused about how to do this.

This is the autoencoder I trained:

class AE(nn.Module):
    def __init__(self, **kwargs):
        super().__init__()
        self.encoder_hidden_layer = nn.Linear(
            in_features=kwargs["input_shape"], out_features=128
        )
        self.encoder_output_layer = nn.Linear(
            in_features=128, out_features=128
        )
        self.decoder_hidden_layer = nn.Linear(
            in_features=128, out_features=128
        )
        self.decoder_output_layer = nn.Linear(
            in_features=128, out_features=kwargs["input_shape"]
        )

    def forward(self, features):
        activation = self.encoder_hidden_layer(features)
        activation = torch.relu(activation)
        code = self.encoder_output_layer(activation)
        code = torch.relu(code)
        activation = self.decoder_hidden_layer(code)
        activation = torch.relu(activation)
        activation = self.decoder_output_layer(activation)
        reconstructed = torch.relu(activation)
        return reconstructed

Can someone help me?

Thank you

You could create a separate model that takes the autoencoder as a submodule and loads the trained weights into it. Then create a separate sequential block with the classification layers, and in the forward pass run the input through both. To freeze the autoencoder weights, you can simply pass only the classification layers' parameters to the optimizer.

Can you show me in code how to do that?

Yes, here is the model code you could use:

class classification(nn.Module):
    def __init__(self, **kwargs):
        super().__init__()
        self.AE = AE(**kwargs)  ## forward the keyword arguments, not a single dict
        self.AE.load_state_dict(torch.load(FILENAME))  ## load the saved state dict, not the file name
        self.classifier = nn.Sequential(nn.Linear(INPUT, OUTPUT))
        ## You can use as many linear layers and other activations as you want
    def forward(self, input):
        x = self.AE(input)
        output = self.classifier(x)  ## feed the autoencoder output, not an undefined name
        ## You can add a sigmoid or softmax here if needed
        return output

and then the optimizer

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=LR)
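Putting the pieces together, here is a minimal end-to-end sketch (the path `ae_weights.pth`, `input_shape=20`, and 10 output classes are all made-up placeholders, and the classes are compact copies of the ones above so the snippet runs standalone). The `requires_grad_(False)` call is optional, since only the classifier's parameters are handed to the optimizer anyway, but it avoids computing gradients the optimizer would never use:

```python
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, **kwargs):
        super().__init__()
        self.encoder_hidden_layer = nn.Linear(kwargs["input_shape"], 128)
        self.encoder_output_layer = nn.Linear(128, 128)
        self.decoder_hidden_layer = nn.Linear(128, 128)
        self.decoder_output_layer = nn.Linear(128, kwargs["input_shape"])

    def forward(self, features):
        x = torch.relu(self.encoder_hidden_layer(features))
        x = torch.relu(self.encoder_output_layer(x))
        x = torch.relu(self.decoder_hidden_layer(x))
        return torch.relu(self.decoder_output_layer(x))

class Classification(nn.Module):
    def __init__(self, weights_path, num_classes, **kwargs):
        super().__init__()
        self.AE = AE(**kwargs)
        self.AE.load_state_dict(torch.load(weights_path))
        self.AE.requires_grad_(False)  # optional: skip gradient computation for the frozen part
        self.classifier = nn.Sequential(
            nn.Linear(kwargs["input_shape"], num_classes)
        )

    def forward(self, x):
        return self.classifier(self.AE(x))

# Pretend the autoencoder was already trained and its weights saved
pretrained = AE(input_shape=20)
torch.save(pretrained.state_dict(), "ae_weights.pth")

model = Classification("ae_weights.pth", num_classes=10, input_shape=20)
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on a dummy batch
x = torch.rand(8, 20)
y = torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
# Only the classifier was updated; the autoencoder weights are untouched
```

Since the optimizer never sees `model.AE.parameters()`, the autoencoder stays exactly as loaded even across many training steps.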

Thank you so much!!!