Hello, I wanted to understand how to implement the equivalent of kernel_regularizer (a parameter of Keras/TensorFlow layers) for a single layer in PyTorch. I saw examples of how to apply a regularizer to the overall loss, but could not find the relevant documentation for the per-layer case.
l2_reg = None
for i in model.named_parameters():
    if "layer_name.weight" in i:
        if l2_reg is None:
            l2_reg = i.norm(2)**2
        else:
            l2_reg = l2_reg + i.norm(2)**2
batch_loss = some_loss_function + l2_reg * reg_lambda
batch_loss.backward()
The code snippet looks generally correct.
One minor issue: named_parameters() yields (name, param) tuples, so you would need to either unpack i into these two variables in the for loop head or index into the tuple before calling .norm() on the parameter.
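For example, the unpacking variant would look roughly like this (a sketch reusing the placeholder names layer_name, some_loss_function, and reg_lambda from your snippet):

import torch

l2_reg = None
for name, param in model.named_parameters():  # unpack the (name, param) tuple
    if "layer_name.weight" in name:
        if l2_reg is None:
            l2_reg = param.norm(2) ** 2
        else:
            l2_reg = l2_reg + param.norm(2) ** 2
batch_loss = some_loss_function + l2_reg * reg_lambda
batch_loss.backward()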
Hi @ptrblck, thanks for the feedback. Yes, I had encountered an error with the tuple access, so in my code I accessed the name and parameter through their indices and implemented it like this:
l2_reg = None
for i in model.named_parameters():
    if "layer_name.weight" in i[0]:
        if l2_reg is None:
            l2_reg = i[1].norm(2)**2
        else:
            l2_reg = l2_reg + i[1].norm(2)**2
batch_loss = some_loss_function + l2_reg * reg_lambda
batch_loss.backward()
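For anyone finding this later, here is a self-contained sketch of the working pattern end to end. The Net module, the layer name fc1, and the loss/optimizer/lambda choices below are made up for illustration, not taken from the posts above:

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 20)   # the layer whose weights we penalize
        self.fc2 = nn.Linear(20, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

model = Net()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
reg_lambda = 0.01

x = torch.randn(8, 10)
target = torch.randint(0, 2, (8,))

optimizer.zero_grad()
output = model(x)

# L2 penalty on fc1's weight only, mimicking Keras' kernel_regularizer
l2_reg = None
for name, param in model.named_parameters():
    if "fc1.weight" in name:
        if l2_reg is None:
            l2_reg = param.norm(2) ** 2
        else:
            l2_reg = l2_reg + param.norm(2) ** 2

batch_loss = criterion(output, target) + l2_reg * reg_lambda
batch_loss.backward()
optimizer.step()

Note that the optimizer's built-in weight_decay applies to every parameter in a param group, so a manual penalty like this (or a dedicated param group for the layer) is the usual way to regularize just one layer.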