How can I freeze weights element-wise?

I want to fix the weights that have a value of zero in model.parameters() (model is a neural network), which means I have to freeze them element-wise, so I wrote the code below.

In the code, model.parameters() is the input to ‘Lparam’, and when I ran the code, I got an error like “‘NoneType’ object has no attribute ‘zero_’”.

And when I tried another trick, requires_grad=False, the error said “only leaf variables can use requires_grad”.

So how can I get at the leaf variables to fix the zero weights in the computation graph? Here is my code.
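(The code snippet itself did not come through in the post; the following is a minimal sketch, with a stand-in nn.Linear model, of the kind of loop that seems to be described, with the two failing calls shown as comments next to the errors they raise.)

import torch
import torch.nn as nn

model = nn.Linear(4, 4)              # stand-in for the real network

for param in model.parameters():     # the post feeds model.parameters() in as 'Lparam'
    zero_mask = (param == 0)         # element-wise mask of the weights to freeze
    # attempt 1: zero the gradients of the frozen entries; before any backward
    # pass param.grad is None, which raises
    # "'NoneType' object has no attribute 'zero_'"
    # param.grad.zero_()
    # attempt 2: switch off autograd for those entries only; param[zero_mask]
    # is the result of an indexing op, i.e. not a leaf, which raises
    # "only leaf variables can use requires_grad"
    # param[zero_mask].requires_grad = False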

The conceptually clean way to fix some part of the weights is to have buffers (registered with self.register_buffer('weight_update_mask', the_mask) in the module initialization) for the mask of what should be updated and for the fixed weights, and then in the forward pass use weight = torch.where(self.weight_update_mask, self.weight_param, self.weight_fixed).
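A minimal sketch of that approach, assuming a single linear layer whose fixed entries should stay at zero (the module name, shapes, and initialization are illustrative assumptions):

import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    def __init__(self, in_features, out_features, weight_update_mask):
        super().__init__()
        # trainable parameter for the entries that may be updated
        self.weight_param = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        # non-trainable state registered as buffers so it moves with .to(device)
        # and is saved in the state_dict
        self.register_buffer('weight_update_mask', weight_update_mask)  # bool, True -> trainable
        self.register_buffer('weight_fixed', torch.zeros(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # autograd only routes gradients into weight_param where the mask is True
        weight = torch.where(self.weight_update_mask, self.weight_param, self.weight_fixed)
        return nn.functional.linear(x, weight, self.bias)

# usage: keep the weights of the first input feature fixed at zero
update_mask = torch.ones(2, 4, dtype=torch.bool)
update_mask[:, 0] = False
layer = MaskedLinear(4, 2, update_mask)
out = layer(torch.randn(8, 4))

Since weight_fixed and weight_update_mask are buffers rather than parameters, the optimizer never sees them, and only the masked-in entries of weight_param receive gradients.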

Now you might get by with with torch.no_grad(): param[fixed_mask] = 0 instead. But I would view this as an optimization of the former, and it is not actually the same when you consider optimizers that don’t work fully element-wise but take the weight/gradient in its entirety into consideration (e.g. LARS/LAMB etc.).
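A minimal sketch of that trick, assuming the rule is “weights that start out exactly zero stay zero” and using a toy layer with random data:

import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(4, 2)
with torch.no_grad():
    model.weight[:, 0] = 0.0                          # create some zero entries for the demo
fixed_masks = {name: p.detach() == 0 for name, p in model.named_parameters()}

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(3):
    x, y = torch.randn(8, 4), torch.randn(8, 2)       # toy data
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()
    # re-impose the constraint after the update, outside of autograd
    with torch.no_grad():
        for name, p in model.named_parameters():
            p[fixed_masks[name]] = 0

print(model.weight)                                   # the first column is still zero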

Best regards

Thomas


I really appreciate your answer!
I'm very new to PyTorch, so it took a little time to understand your comment, and now I have one more question, about the optimizer.

I'm trying to fix some of the training weights while using an SGD optimizer with momentum, and I wonder whether manually fixing part of the weights in nn.parameters() under the SGD optimizer with momentum is possible?
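For concreteness, this is roughly the kind of manual intervention the question is about (the toy layer, the zero criterion, and the hyperparameters are assumptions made for illustration); whether zeroing the gradient entries like this is sufficient once SGD's momentum buffer is involved is exactly the open question:

import torch
import torch.nn as nn

model = nn.Linear(4, 2)
with torch.no_grad():
    model.weight[:, 0] = 0.0                          # entries that should stay fixed
fixed_mask = model.weight.detach() == 0

optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

x, y = torch.randn(8, 4), torch.randn(8, 2)
optimizer.zero_grad()
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
# manually zero the gradients of the fixed entries before the momentum update
with torch.no_grad():
    model.weight.grad[fixed_mask] = 0
optimizer.step()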