Adding L1/L2 regularization in a Convolutional Network in PyTorch?

I am new to PyTorch and would like to add L1 regularization after a layer of a convolutional network, but I do not know how to do it.

The architecture of my network is defined as follows:

downconv = nn.Conv2d(outer_nc, inner_nc, kernel_size=4,
                         stride=2, padding=1, bias=use_bias)
downrelu = nn.LeakyReLU(0.2, True)
downnorm = norm_layer(inner_nc)
uprelu = nn.ReLU(True)
upnorm = norm_layer(outer_nc)
upconv = nn.ConvTranspose2d(inner_nc, outer_nc,
                                    kernel_size=4, stride=2,
                                    padding=1, bias=use_bias)
down = [downrelu, downconv]
up = [uprelu, upconv, upnorm]
model = down + up

And I used the following code to implement the regularizer:

import numpy as np
import torch
import torch.nn as nn
from torch.autograd import Variable

xx = nn.Parameter(torch.from_numpy(np.ones((3, 3))))
target = Variable(torch.from_numpy(np.zeros((3, 3))))
l1_crit = nn.L1Loss()
l1_crit(xx, target)

l1_crit = nn.L1Loss(size_average=False)
reg_loss = 0
for param in model:
    print("PARAM: ", param)
    reg_loss += l1_crit(param)

model = down + [submodule] + up + [nn.Dropout(0.5)]

But I get the following error:
TypeError: forward() takes exactly 3 arguments (2 given)

I think the problem is in my code that implements the L1 regularization. Can someone help me?


In reg_loss += l1_crit(param) you pass only the input and no target; nn.L1Loss.forward expects both an input and a target, which is why it reports that forward() takes exactly 3 arguments (2 given). (size_average is a constructor argument, not something you pass at call time.)
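A minimal sketch of the corrected call, assuming you want to penalize every parameter toward zero. Note that `model` in your code is a plain Python list of modules, so you have to iterate over each module's parameters; the two-layer `model` here is an illustrative stand-in for yours:

```python
import torch
import torch.nn as nn

# Illustrative stand-in for the list of modules in the question
model = [nn.Conv2d(3, 8, kernel_size=4, stride=2, padding=1),
         nn.ConvTranspose2d(8, 3, kernel_size=4, stride=2, padding=1)]

# reduction='sum' is the modern spelling of size_average=False
l1_crit = nn.L1Loss(reduction='sum')
reg_loss = 0.0
for module in model:
    for param in module.parameters():
        # L1Loss needs both an input and a target; a zero target
        # turns it into a plain L1 penalty on the parameter.
        reg_loss = reg_loss + l1_crit(param, torch.zeros_like(param))
```

`reg_loss` can then be scaled by a regularization strength and added to the task loss before calling `backward()`.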

You could implement L1 regularization along the same lines as the usual L2 regularization example: for L1, change W.norm(2) to W.norm(p=1).
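Putting that together, a short sketch of adding an L1 penalty to the training loss via parameter norms; the model and the `l1_lambda` strength are illustrative, not from the original code:

```python
import torch
import torch.nn as nn

# Illustrative model; substitute your own modules
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=4, stride=2, padding=1),
    nn.ReLU(),
    nn.ConvTranspose2d(8, 3, kernel_size=4, stride=2, padding=1),
)

l1_lambda = 1e-4  # illustrative regularization strength
# W.norm(p=1) sums the absolute values of each parameter tensor;
# for L2 regularization you would use W.norm(2) instead.
l1_penalty = sum(W.norm(p=1) for W in model.parameters())
# loss = task_loss + l1_lambda * l1_penalty   # then loss.backward()
```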
