How to freeze part of the parameters of an embedding layer?

I would like to represent the unknown word (index 0) by the mean of the pretrained vectors:

with torch.no_grad():
    m.embedding.weight[1:] = index2vector
    m.embedding.weight[0] = index2vector.mean(dim=0)

embedding_param_ids = [id(p) for p in m.embedding.parameters()]
params = [p for p in m.parameters() if id(p) not in embedding_param_ids and id(p) not in user_bias_params and p.requires_grad]
embedding_params = [p for p in m.parameters() if id(p) in embedding_param_ids and p.requires_grad]

params_dict = [{'params': params, 'lr': LR},
               {'params': embedding_params, 'weight_decay': 1E-6}]

index2vector is a matrix of pretrained word vectors, but I want to freeze only m.embedding.weight[1:].

Alternatively, is there any other advice on how to represent unknown words?

A dirty but working hack would be to zero out the gradient of the frozen rows right before the optimizer step.
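A minimal sketch of that hack (the sizes `num_words` and `dim` are made up for illustration; rows 1: stand in for the pretrained vectors, and only row 0, the unknown-word vector, keeps training):

```python
import torch
import torch.nn as nn

# Hypothetical sizes for illustration.
num_words, dim = 5, 3
emb = nn.Embedding(num_words, dim)

# Pretend rows 1: hold the pretrained vectors we want frozen.
frozen = emb.weight.data[1:].clone()
row0_before = emb.weight.data[0].clone()

optimizer = torch.optim.SGD(emb.parameters(), lr=0.1)

# One training step on a dummy batch.
idx = torch.tensor([0, 1, 2])
loss = emb(idx).sum()
optimizer.zero_grad()
loss.backward()

# The hack: zero the gradient of the frozen rows right before stepping.
emb.weight.grad[1:] = 0
optimizer.step()

# Rows 1: are unchanged; row 0 has been updated.
assert torch.equal(emb.weight.data[1:], frozen)
```

Note this only works cleanly with optimizers that leave zero-gradient parameters untouched (e.g. plain SGD without weight decay); optimizers with running statistics or decay may still move the frozen rows slightly.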


Thanks, that works.