Freezing some weights in a layer during training

Hi all,

I’m working on a project where I’d like to freeze some weights and only update the others during training.

I’d like to use a hook with register_forward_pre_hook to achieve this, since I’m learning about hooks right now. Below is my code:

    def hook(self, model, inputs):
        # Intended to zero out the masked weights before every forward pass
        with torch.no_grad():
            model.weight = model.weight * self.sparse_mask[self.type[model]]

    def register_hook(self, module):
        self.handle = module.register_forward_pre_hook(self.hook)

However, I got this error:

    TypeError: cannot assign 'torch.FloatTensor' as parameter 'weight' (torch.nn.Parameter or None expected)

I’ve lost a day trying to fix this problem, and I was wondering if someone could help me. Any ideas are welcome!

Thanks!

Hi Mandy,

I think this should work:

    def hook(self, module, input):
        with torch.no_grad():
            # Modify the underlying data tensor instead of reassigning the Parameter itself
            module.weight.data = module.weight.data * self.sparse_mask[self.type[module]]
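
In case it helps, here is a minimal, self-contained sketch of this pattern; the MaskedLinearDemo class, the nn.Linear layer, and the mask shape are made up for illustration and are not the actual project code:

    import torch
    import torch.nn as nn

    class MaskedLinearDemo:
        # Toy example: stores a sparsity mask per module and applies it to the
        # weights right before every forward pass via a forward pre-hook.
        def __init__(self, module, mask):
            self.sparse_mask = {"linear": mask}
            self.type = {module: "linear"}   # maps each module to its mask key
            self.handle = module.register_forward_pre_hook(self.hook)

        def hook(self, module, input):
            with torch.no_grad():
                module.weight.data = module.weight.data * self.sparse_mask[self.type[module]]

    layer = nn.Linear(4, 4)
    mask = (torch.rand(4, 4) > 0.5).float()   # random binary mask, same shape as layer.weight
    demo = MaskedLinearDemo(layer, mask)
    out = layer(torch.randn(2, 4))            # the hook fires here and zeroes the masked weights

Note that gradients are still computed for the masked entries; they are simply re-zeroed by the hook before each forward pass.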

Thank you @spanev!! It works!!

Hi @spanev

I really appreciate your help, but I’ve run into another error :frowning:

    def hook(self, module, input):
        with torch.no_grad():
            # Modify the underlying data tensor instead of reassigning the Parameter itself
            module.weight.data = module.weight.data * self.sparse_mask[self.type[module]]

I used exactly what you showed me, and I got this error:

    RuntimeError: expected device cuda:0 and dtype Float but got device cpu and dtype Float

Could you please show me how to fix this problem?

Any response would be appreciated!
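
In case it helps: the message suggests the mask tensors are still on the CPU while the model’s weights live on cuda:0. A minimal sketch of one possible fix, assuming sparse_mask holds plain CPU tensors, is to move the mask onto the weight’s device (and dtype) inside the hook, or to call .to(device) on the masks once when they are created:

    def hook(self, module, input):
        with torch.no_grad():
            # Assumption: the masks were built on the CPU while the model sits on cuda:0.
            # Moving each mask to the weight's device (and dtype) avoids the mismatch error.
            mask = self.sparse_mask[self.type[module]].to(device=module.weight.device,
                                                          dtype=module.weight.dtype)
            module.weight.data = module.weight.data * mask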