Hi,
I’m trying to create a conv1d layer with fixed weights that still allows gradients to pass through it. I understand that you pass the network’s parameters to the optimizer and call optimizer.step(), but in this case how do you omit the conv1d layer’s parameters? In particular, referring to the code below, I want to fix the weights of self.conv but allow self.weight to be updated by autograd. How would I go about initialising self.conv to [1, 2, 3, 4] and fixing it at that?
I read something about zeroing the gradients after each backward pass, but what if I want to keep the weight freezing at the customModule level of abstraction?
```python
class customModule(nn.Module):
    def __init__(self):
        super(customModule, self).__init__()
        self.weight = Parameter(torch.Tensor(4))
        self.conv = nn.Conv1d(1, 1, 5, 1, 2, 1, 1, False)
        # want to fix the weights to [1, 2, 3, 4]
        params = self.conv.parameters()  # <--- how would I set this?

    def forward(self, input):
        z = torch.mm(input, self.weight)  # e.g. 4x4 multiplied by 4x1
        return self.conv(z)
```
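For what it’s worth, here is a standalone sketch of what I’m considering: copy the fixed values into the conv weight under torch.no_grad() and then set requires_grad to False, so the weight receives no gradient but gradients still flow through the conv to earlier tensors. I’m assuming a kernel size of 4 here (so the four values [1, 2, 3, 4] fit) and no bias; I’m not sure this is the idiomatic way to do it.

```python
import torch
import torch.nn as nn

# Sketch, not the module above: kernel size 4 assumed so [1, 2, 3, 4] fits.
conv = nn.Conv1d(1, 1, 4, bias=False)

# Initialise the weights to the fixed values without tracking the copy.
with torch.no_grad():
    conv.weight.copy_(torch.tensor([[[1.0, 2.0, 3.0, 4.0]]]))

# Freeze: no gradient is accumulated for this parameter.
conv.weight.requires_grad_(False)

x = torch.randn(1, 1, 8, requires_grad=True)  # stands in for an upstream output
y = conv(x).sum()
y.backward()

print(conv.weight.grad)        # None: the weight is frozen
print(x.grad is not None)      # True: gradients still pass through the conv
```

If this works, I suppose the optimizer could then be given only the trainable parameters, e.g. `optim.SGD([p for p in model.parameters() if p.requires_grad], lr=0.1)`, which would keep the freezing logic inside the module. Is that the right approach?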