Fixed linear layer

Can I create a linear layer whose weights will not change?
I want to compress the output from 10 neurons to 3, but without the linear layer being trained.

self.out = nn.Linear(10, 3)

You can do:

for p in model.out.parameters():
    p.requires_grad = False
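
As a follow-up usage note: once requires_grad is False, these parameters receive no gradients, so it can also make sense to exclude them when constructing the optimizer. A minimal sketch, assuming model is the instantiated model from above (SGD is just an example, any optimizer works the same way):

import torch

# pass only the still-trainable parameters to the optimizer,
# so the frozen layer is skipped entirely
optimizer = torch.optim.SGD(
    filter(lambda p: p.requires_grad, model.parameters()), lr=1e-2
)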

eval() won’t work in this case, as it only switches the behavior of certain layers such as batchnorm and dropout.
Instead, set the requires_grad attribute of self.out.weight and self.out.bias to False to fix these parameters.

EDIT: I was too slow, as @spanev already added the right approach :wink:
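
Putting both suggestions together, here is a minimal sketch; the Net class and the hidden layer size are made up for illustration, only self.out = nn.Linear(10, 3) comes from the question:

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(20, 10)   # hypothetical hidden layer
        self.out = nn.Linear(10, 3)    # layer from the question, to be frozen

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        return self.out(x)

model = Net()

# freeze the output layer after creating the model instance
for p in model.out.parameters():
    p.requires_grad = False

print(model.out.weight.requires_grad)  # False
print(model.fc1.weight.requires_grad)  # True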


Did you mean this?

def forward(self, x):
    out = self.fc1(x)    # Linear 1
    out = self.relu1(out)
    out = self.bn1(out)

    out = self.out(out)  # Linear 2 (the output layer to be frozen)
    self.out.weight.requires_grad = False
    self.out.bias.requires_grad = False

    return out

If you don’t want to train these parameters at all, follow @spanev’s approach and set the requires_grad attribute to False after creating an instance of the model.

I want to freeze only one linear layer. I simply gave a simplified example.

@spanev’s example only freezes the parameters of model.out, which would refer to your linear layer.


I can’t figure out where in the code I need to insert this.

AttributeError: 'model' object has no attribute 'out'

Sorry, got it


I wonder how to check whether the weights are actually changing.
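
One way to check is to clone the frozen weight before a training step and compare it afterwards. A small sketch, assuming model.out is the frozen layer:

import torch

# snapshot the frozen layer's weight before a training step
before = model.out.weight.detach().clone()

# ... run one forward/backward pass and optimizer.step() here ...

after = model.out.weight.detach()
print(torch.equal(before, after))  # True if the layer really stayed fixed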

Can you share the code of the model, or at least the relevant part of it?
If out is nested inside another module or class, you would have to access it with something like model.submodule_name.out
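
If you’re unsure of the exact attribute path, printing the named submodules can help locate the linear layer. A small sketch; submodule_name is just a placeholder for whatever your container module is actually called:

# list all submodules with their attribute paths
for name, module in model.named_modules():
    print(name, module.__class__.__name__)

# freeze the layer once you know its path, e.g. model.submodule_name.out
# for p in model.submodule_name.out.parameters():
#     p.requires_grad = False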
