How to separate the parameters in optimisation (different hyperparameters for variables)?

How can I separate the parameters for optimisation, i.e. assign different hyperparameters to different groups of parameters?

def initialize_parameters(self):
    # user embedding, U
    self.U = nn.Embedding(self.num_user, self.edim_user)

    # item embeddings, V (one per domain)
    self.V_d1 = nn.Embedding(self.num_item_d1, self.edim_item)
    self.V_d2 = nn.Embedding(self.num_item_d2, self.edim_item)

    # domain 1: per-layer weights and biases
    self.weights_d1 = nn.ParameterList([
        nn.Parameter(torch.normal(mean=0.0, std=self.std, size=(self.layers[l], self.layers[l + 1])))
        for l in range(len(self.layers) - 1)
    ])
    self.biases_d1 = nn.ParameterList([
        nn.Parameter(torch.normal(mean=0.0, std=self.std, size=(self.layers[l + 1],)))
        for l in range(len(self.layers) - 1)
    ])

    # domain 2: per-layer weights and biases
    self.weights_d2 = nn.ParameterList([
        nn.Parameter(torch.normal(mean=0.0, std=self.std, size=(self.layers[l], self.layers[l + 1])))
        for l in range(len(self.layers) - 1)
    ])
    self.biases_d2 = nn.ParameterList([
        nn.Parameter(torch.normal(mean=0.0, std=self.std, size=(self.layers[l + 1],)))
        for l in range(len(self.layers) - 1)
    ])

    # shared cross-domain weights (the ones I want to regularise separately)
    self.weights_shared = nn.ParameterList([
        nn.Parameter(torch.normal(mean=0.0, std=self.std, size=(self.layers[l], self.layers[l + 1])))
        for l in range(self.cross_layers)
    ])

    # nn.Parameter sets requires_grad=True by default, so it is not passed explicitly
    optimizer = torch.optim.SGD(self.parameters(), lr=self.learning_rate)

for name in model.state_dict():
    print(name)

OUTPUT

U.weight
V_d1.weight
V_d2.weight
weights_d1.0
weights_d1.1
weights_d1.2
weights_d1.3
biases_d1.0
biases_d1.1
biases_d1.2
biases_d1.3
weights_d2.0
weights_d2.1
weights_d2.2
weights_d2.3
biases_d2.0
biases_d2.1
biases_d2.2
biases_d2.3
weights_shared.0
weights_shared.1
weights_shared.2

What should I do if I want to regularise only the shared weights, i.e. weights_shared.0, weights_shared.1, weights_shared.2…
With nn.Module’s default self.parameters(), all registered parameters are passed to the optimizer together.
Is there any way I could split them or specify them separately?
Also, can someone confirm whether this is the best way to initialise weights/biases manually?

I appreciate any help and suggestions. Thank you!

Do you mean to ask how to train only specific layers of a neural network? If so, one could try something like this:

for param in model.parameters():
    param.requires_grad = False  # freeze every parameter in the model

This way you can freeze or unfreeze whichever layers you want.
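For example, to train only the shared weights of the model above, you could first freeze everything and then unfreeze that one ParameterList again. A minimal sketch, assuming the attribute names from your initialize_parameters:

# freeze everything ...
for param in model.parameters():
    param.requires_grad = False

# ... then unfreeze only the shared cross-layer weights
# (nn.ParameterList is a Module, so it exposes .parameters() too)
for param in model.weights_shared.parameters():
    param.requires_grad = True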


No, I mean I want different hyperparameters for different layers.
Say,
layer 1: lr = 0.01
layer 2: lr = 0.01, weight_decay = 0.01

I have no idea how to specify this, since self.parameters() passes everything in at once.

Oh, in that case, check out the Per-parameter options section of the torch.optim documentation.

Essentially, you need to create different parameter groups with different hyperparameters and pass them to the optimizer as a list of dicts instead of model.parameters().

For instance,

opt = optim.SGD([
    {'params': layer_1.parameters(), 'lr': 0.01},
    {'params': layer_2.parameters(), 'lr': 0.01, 'weight_decay': 0.01},
])
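Applied to your model, a minimal sketch might look like this (assuming the attribute names from your initialize_parameters and purely illustrative hyperparameter values), with weight_decay, i.e. L2 regularisation, applied only to the shared weights:

# collect the shared weights and everything else into two disjoint groups
shared_params = list(model.weights_shared.parameters())
shared_ids = {id(p) for p in shared_params}
other_params = [p for p in model.parameters() if id(p) not in shared_ids]

# weight_decay is applied only to the shared group
optimizer = torch.optim.SGD([
    {'params': other_params, 'lr': 0.01},
    {'params': shared_params, 'lr': 0.01, 'weight_decay': 0.01},
])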

I hope that helps.