How to fix weights in a layer

Hi,

I’m trying to create a conv1d layer with fixed weights that will still allow gradients to pass through. I understand that you pass the network’s parameters to the optimizer and run optimizer.step(), but in that case how do I omit the conv1d layer? In particular, referring to the code below, I want to fix the weights of self.conv but allow self.weight to be updated by Autograd. How would I go about initialising self.conv to [1, 2, 3, 4] and keeping it fixed at that?

I read something about zeroing gradients after each backward pass, but what if I want to keep the weight freezing at the customModule level of abstraction?

import torch
import torch.nn as nn
from torch.nn import Parameter

class customModule(nn.Module):
    def __init__(self):
        super(customModule, self).__init__()
        self.weight = Parameter(torch.Tensor(4))  # trainable parameter
        # Conv1d(in_channels=1, out_channels=1, kernel_size=5, stride=1, padding=2, dilation=1, groups=1, bias=False)
        self.conv = nn.Conv1d(1, 1, 5, 1, 2, 1, 1, False) # want to fix the weights to [1, 2, 3, 4]
        params = self.conv.parameters() # <--- how would I set this?

    def forward(self, input):
        z = torch.mm(input, self.weight) # e.g. 4x4 multiplied by 4x1
        return self.conv(z)
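
For concreteness, I imagine writing the values in would look something like the standalone sketch below (it assumes a kernel of size 4 so that [1, 2, 3, 4] actually matches the Conv1d weight shape of (1, 1, 4), unlike the kernel size of 5 above), but I don’t know how to keep them fixed afterwards:

import torch
import torch.nn as nn

# hypothetical standalone sketch: write the fixed values into the kernel
conv = nn.Conv1d(1, 1, 4, bias=False)  # kernel_size=4 so [1, 2, 3, 4] fits the (1, 1, 4) weight
with torch.no_grad():
    conv.weight.copy_(torch.tensor([[[1., 2., 3., 4.]]]))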

Have you tried using:

param.requires_grad = False

It’s similar to finetuning, but with the last layer fixed.
http://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html#finetuning-the-convnet
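
Roughly, a minimal sketch of what that could look like for the conv layer (standalone, with an arbitrary kernel size of 4): freezing the kernel does not stop gradients from flowing back to the input or to earlier trainable parameters.

import torch
import torch.nn as nn

conv = nn.Conv1d(1, 1, 4, bias=False)
conv.weight.requires_grad = False               # freeze this layer's kernel

x = torch.randn(2, 1, 10, requires_grad=True)   # (batch, channels, length)
conv(x).sum().backward()

print(x.grad is not None)   # True: gradients still pass through the frozen conv
print(conv.weight.grad)     # None: no gradient is accumulated for the frozen weight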


Check out my tutorial; it covers this.


Hi all,
I want to know: what happens if we do not add the parameters that should be fixed to the optimizer? Is that equivalent to fixing the weights, or will it throw an error? Thanks.
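
For example (a sketch reusing the customModule above, with a hypothetical SGD optimizer), I mean something like only passing the still-trainable parameters:

import torch.optim as optim

model = customModule()
model.conv.weight.requires_grad = False  # the parameters meant to stay fixed
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = optim.SGD(trainable, lr=0.01)  # model.conv.weight is simply left out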