Kernel Regularizer

Hi @ptrblck, thanks for the feedback. Yes, I access the parameters by index in my code.
I encountered an error when I implemented it like this:

l2_reg = None

# accumulate the squared L2 norm of the selected weights
for name, param in model.named_parameters():
    if "layer_name.weight" in name:
        if l2_reg is None:
            l2_reg = param.norm(2) ** 2
        else:
            l2_reg = l2_reg + param.norm(2) ** 2

batch_loss = some_loss_function + l2_reg * reg_lambda
batch_loss.backward()

RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.
After changing the call to batch_loss.backward(retain_graph=True), it works.
Though there is already a dedicated thread on this, may I take this opportunity to ask whether it is possible to avoid retain_graph=True in the code above? I have read that it increases the training time of each consecutive iteration, so I would like to avoid it if possible.
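
For what it's worth, my current understanding is that the error appears when the penalty term is built once and then reused across iterations, so the second backward() walks a graph whose buffers were already freed. Below is a minimal sketch of what I think a retain_graph-free version would look like, with the penalty rebuilt inside the loop on every iteration. The tiny model, criterion, optimizer, and random data are just stand-ins for my actual setup. Is this the right way to do it?

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 10))                 # stand-in for my model
criterion = nn.MSELoss()                                 # stand-in for some_loss_function
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
reg_lambda = 1e-4

for _ in range(5):                                       # stand-in training loop
    inputs, targets = torch.randn(8, 10), torch.randn(8, 10)
    optimizer.zero_grad()

    # rebuild the penalty every iteration so each backward() sees a fresh graph
    l2_reg = None
    for name, param in model.named_parameters():
        if "0.weight" in name:                           # "layer_name.weight" in my real model
            sq_norm = param.norm(2) ** 2
            l2_reg = sq_norm if l2_reg is None else l2_reg + sq_norm

    batch_loss = criterion(model(inputs), targets) + reg_lambda * l2_reg
    batch_loss.backward()                                # no retain_graph=True needed
    optimizer.step()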