How do I freeze the specific weights in a layer?

Hey. It's not 100% clear to me what you are trying to achieve, but I'll answer as far as I understand it.
PyTorch weight tensors all have the attribute requires_grad. If it is set to False, the weights of that ‘layer’ will not be updated during the optimization process; they are simply frozen.
You can do it in this manner; here the whole 0th weight tensor is frozen:

# Freeze the whole 0th parameter tensor (here, the weight matrix of m)
for i, param in enumerate(m.parameters()):
    if i == 0:
        param.requires_grad = False
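
If you would rather freeze a whole tensor by name instead of by index, you can also set the flag on the parameter directly and, optionally, hand the optimizer only the parameters that still require gradients. A minimal sketch, assuming the same Linear module m as in the example further down:

import torch

m = torch.nn.Linear(4, 2)

# Freeze the weight matrix by attribute instead of by index
m.weight.requires_grad_(False)

# Give the optimizer only the parameters that are still trainable
opt = torch.optim.Adam(p for p in m.parameters() if p.requires_grad)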

I am not aware of a way to set requires_grad = False for just a slice of the weights (requires_grad is a property of the whole tensor, not of individual elements). At least I can't do it without PyTorch complaining.

Anyway, you can zero a slice of the gradients before the optimizer step, so that exact slice of the weights doesn't change when the step is taken. Here is a dummy example:

import torch

# Toy setup: one Linear layer, an optimizer, and a single (x, y) pair
m = torch.nn.Linear(4, 2)
opt = torch.optim.Adam(m.parameters())
x = torch.rand((1, 4))
y = torch.tensor([[0, 1]], dtype=torch.float32)
crit = torch.nn.BCEWithLogitsLoss()

out = m(x)
loss = crit(out, y)  # BCEWithLogitsLoss expects (input, target)
loss.backward()

# Zero the gradient of the slice [:, 1:] of the 0th parameter (the weight matrix),
# so the optimizer leaves exactly those entries untouched
for i, param in enumerate(m.parameters()):
    if i == 0:
        param.grad[:, 1:] = torch.zeros_like(param.grad[:, 1:])

opt.step()
print('after optimizer step')
for param in m.parameters():
    print(param)

After doing this, I can see that the slice [:, 1:] of the 0th weight tensor didn't change after the optimizer step.
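
If you don't want to zero the slice by hand before every step, one alternative (just a sketch, not the only way) is to register a gradient hook on the weight tensor so that the slice is zeroed automatically on every backward pass:

import torch

m = torch.nn.Linear(4, 2)

def zero_slice(grad):
    # Return a gradient whose [:, 1:] slice is zeroed, so those weights stay put
    grad = grad.clone()
    grad[:, 1:] = 0.
    return grad

# The hook runs on every backward(), so the slice stays effectively frozen
m.weight.register_hook(zero_slice)

Keep in mind that optimizer features like weight decay act on the parameters themselves, so they can still move weights whose gradients are zero.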

Hope it helps :slight_smile:
