I would like to freeze only one row of the embedding layer so that its weights are not updated during training. Some people suggested using two separate embedding layers: one for the trainable embeddings and another for the frozen embedding. But I am not sure how to look up embeddings from the two layers and concatenate the results in a fast way. Any help would be appreciated.
```python
import torch
import torch.nn as nn


class myModel(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.config = config
        # e.g. 200 x 100, where 200 is the number of trainable relations
        # and 100 is the embedding dimension
        self.trainable_rel_embeddings = nn.Embedding(self.config.relTotal, self.config.dim)
        self.freeze_rel_embeddings = nn.Embedding(1, self.config.dim)
        self.freeze_rel_embeddings.weight.data = self.config.freeze_init_rel_embs
        self.freeze_rel_embeddings.weight.requires_grad = False
        self.trainable_rel_embeddings.weight.data = self.config.init_rel_embs

    def forward(self, batch):
        '''
        batch holds indices, e.g. one batch with 5 index pairs:
        tensor([[  3, 200],
                [  3, 200],
                [  3, 200],
                [188,   2],
                [188, 200]])
        '''
        # How could I look up indices below 200 in self.trainable_rel_embeddings
        # and index 200 in self.freeze_rel_embeddings?
```
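For reference, this is the kind of concatenation I have in mind: a rough sketch, assuming that index 200 (i.e. `relTotal`) should resolve to the frozen row, and using `torch.cat` plus the functional `F.embedding` lookup. I am not sure whether rebuilding the weight matrix on every forward pass like this is actually fast:

```python
import torch
import torch.nn.functional as F

def forward(self, batch):
    # Stack the trainable rows (indices 0 .. relTotal-1) and the single
    # frozen row (index relTotal) into one (relTotal + 1) x dim matrix.
    weight = torch.cat(
        [self.trainable_rel_embeddings.weight, self.freeze_rel_embeddings.weight],
        dim=0,
    )
    # Gradients flow back only into the trainable slice; the frozen row has
    # requires_grad=False, so its part of the gradient is simply dropped.
    return F.embedding(batch, weight)
```

With this layout, indices 0..199 would hit the trainable table and index 200 the frozen one, which matches the batch tensor above, but I would appreciate confirmation that this is correct and efficient.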