Based on what I have been reading here, one can get L2 regularization by passing a nonzero value to the optimizer through the weight_decay argument.
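If I understand correctly, that looks something like this (the model and the hyperparameter values here are just placeholders):

import torch

model = torch.nn.Linear(10, 5)  # placeholder model
# weight_decay adds an L2 penalty on all of the model's parameters
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)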
Yet, one may implement a custom loss function like this one, where the L2 regularization is already taken into account:
import torch

class AutoRec_Loss(torch.nn.Module):
    def __init__(self):
        super(AutoRec_Loss, self).__init__()

    def forward(self, predicted_ratings, real_ratings, weights, reg_strength):
        # reconstruction error: Frobenius norm of the rating residuals
        ratings_loss = torch.norm(real_ratings - predicted_ratings)
        # L2 regularization on the model's weights
        weights_regularization = (reg_strength / 2) * torch.norm(weights)
        return ratings_loss + weights_regularization
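For reference, this is how I would call it (the tensor shapes and the reg_strength value are arbitrary):

loss_fn = AutoRec_Loss()
predicted_ratings = torch.randn(4, 8, requires_grad=True)  # stand-in for model output
real_ratings = torch.randn(4, 8)
weights = torch.randn(8, 8, requires_grad=True)  # stand-in for a layer's weight matrix
loss = loss_fn(predicted_ratings, real_ratings, weights, reg_strength=0.01)
loss.backward()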
Given this loss function, what would happen if I also passed a nonzero weight_decay to the underlying optimizer?