Regularization in Torch

Hi, I just transferred from TensorFlow to PyTorch. One quick question about regularization loss in PyTorch:

Does PyTorch have something similar to TensorFlow's way of collecting all regularization losses automatically?
tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)

Or do we need to implement it ourselves?

If you simply want to use it during optimization, you can use the weight_decay keyword argument of torch.optim.Optimizer.
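
A minimal sketch (assuming a model is already defined elsewhere):

import torch.optim as optim

# weight_decay applies an L2 penalty to every parameter during the update step
optimizer = optim.SGD(model.parameters(), lr=1e-2, weight_decay=1e-5)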

Thanks. If I want to output the penalty, for example the L2 loss, do you think the following is workable as a simple implementation?

def get_reg_loss(model):
    reg_loss = 0

    # accumulate the sum of squared entries of every parameter tensor
    for param in model.parameters():
        reg_loss += param.pow(2).sum()

    _lambda = 0.001
    return _lambda * reg_loss
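
For context, I would use it in a training step roughly like this (just a sketch, assuming model, criterion, optimizer, inputs, and targets are defined elsewhere):

optimizer.zero_grad()
# task loss plus the L2 penalty computed above
loss = criterion(model(inputs), targets) + get_reg_loss(model)
loss.backward()
optimizer.step()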

I think it's OK, but notice that it will also penalize the biases.
Maybe it's better to use named_parameters; it was added in 0.1.12:

import torch
import torch.nn as nn
import torch.optim as optim

m = nn.Sequential(
    nn.Linear(10, 20),
    nn.ReLU(),
    nn.Linear(20, 20),
    nn.ReLU(),
)

# split the parameters so that weight decay can be disabled for the biases
weights, biases = [], []
for name, p in m.named_parameters():
    if 'bias' in name:
        biases += [p]
    else:
        weights += [p]

optimizer = optim.SGD([
    {'params': weights},
    {'params': biases, 'weight_decay': 0},
], lr=1e-2, momentum=0.9, weight_decay=1e-5)
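
The per-group setting overrides the optimizer-level default, so weight_decay=1e-5 applies only to the weights while the biases get no decay. A quick sanity check (using the optimizer defined above):

for group in optimizer.param_groups:
    print(len(group['params']), group['weight_decay'])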

Hi,

Thank you very much. Does the pretrained model have this functionality?

import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)

# replace the final fully connected layer for a 2-class problem
num_ftrs = model.fc.in_features
model.fc = nn.Linear(num_ftrs, 2)
model = model.cuda()

weights, biases = [], []
for name, p in model.named_parameters():
    if 'bias' in name:
        biases += [p]
    else:
        weights += [p]

This gives me an error:
AttributeError: 'ResNet' object has no attribute 'named_paramenters'

You have a typo:
named_paramenters -> named_parameters