Layer by layer self-supervised learning

Hello Everyone,

I have a question about training a model one layer at a time in a self-supervised manner. For example, I need to train AlexNet's first conv layer, then save the model. After that, I load the model, freeze the weights, add another conv layer, and repeat the process until all five conv layers are trained.

I am not sure how to approach this task. I can think of saving the entire model instead of just the state_dict, and then passing the model to another class that has one more conv layer. In this way, I would create five classes, one per conv layer. In the final step, I would load the last model, which contains all five layers, and train it in a supervised way.
Is there a better way to do this?

Could anyone help me with this?

Thanks in advance.

Regards,

Nikita

I have never tried that, but I think it may work:

import torch
import torch.nn as nn


class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.ModuleList()

    def add_layer(self, layer):
        self.body.append(layer)

    def forward(self, x):
        # nn.ModuleList is not callable, so apply each layer in turn
        for layer in self.body:
            x = layer(x)
        return x


pool = [nn.Conv2d(3, 32, 1), nn.Conv2d(32, 64, 1), nn.Conv2d(64, 128, 1)]
net = Net()
for i, layer in enumerate(pool):
    # 1. load the state dict saved in the previous round
    if i > 0:
        state_dict = torch.load("{}.pt".format(i - 1))
        net.load_state_dict(state_dict)

    # 2. add the new layer
    net.add_layer(layer)

    # 3. freeze the earlier layers, init the optimizer and scheduler, and train
    pass

    # 4. save the state dict for the next round
    torch.save(net.state_dict(), "{}.pt".format(i))
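For step 3, one way to "freeze" the earlier layers is to set `requires_grad = False` on their parameters and hand the optimizer only the trainable ones. This is just a sketch of that step (the two-layer `net` and the SGD settings here are placeholders, not part of the original post):

```python
import torch
import torch.nn as nn

# Placeholder network standing in for `net.body` after two rounds
net = nn.ModuleList([nn.Conv2d(3, 32, 1), nn.Conv2d(32, 64, 1)])

# Freeze everything except the most recently added layer
for layer in net[:-1]:
    for p in layer.parameters():
        p.requires_grad = False

# Only pass the still-trainable parameters to the optimizer
optimizer = torch.optim.SGD(
    (p for p in net.parameters() if p.requires_grad), lr=0.01
)
```

Frozen parameters then receive no gradient during training, so only the newest layer is updated.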

By the way, PGGAN is one of the networks that "grows" layer by layer. Here's its implementation, using add_module.
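In case the add_module route is easier for you, a minimal sketch (the layer names and shapes here are made up for illustration): add_module registers a child under a given name, so you can grow a network without defining a new class per stage.

```python
import torch
import torch.nn as nn

net = nn.Sequential()
# Each call registers a new child module under the given name
net.add_module("conv0", nn.Conv2d(3, 32, 1))
net.add_module("conv1", nn.Conv2d(32, 64, 1))

# The children run in insertion order on a forward pass
out = net(torch.randn(1, 3, 8, 8))
```

The state_dict keys follow the registered names (`conv0.weight`, `conv0.bias`, ...), which keeps loading an earlier checkpoint into the grown network straightforward.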

Thank you so much for the idea. I will try this approach.