PyTorch L1 layer

Hello,

I’m trying to replicate the nn.L1Penalty layer in PyTorch, just like line 89 in https://github.com/cvondrick/videogan/blob/master/main.lua

I’ve read that I can add an L1 loss term to the network, but in this case that seems too complicated. Can anybody suggest how to add an L1 layer at some point in the network?

Thanks

What is an L1 layer? Are you trying to add a layer that computes the L1 loss between your data and some learnable weights?

If so, you can do the following:

import torch
import torch.nn.functional as F

# declare weights somewhere
weights = torch.randn(..., requires_grad=True)  # fill in the shape you need

# In your training process, add:
output = F.l1_loss(input, weights)
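Note that F.l1_loss returns the mean absolute error by default; if you want the summed penalty instead, you can pass reduction='sum':

output = F.l1_loss(input, weights, reduction='sum')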

Just like what he did on line 89. Can you explain how and why he is doing it? I’m not familiar with Torch7.

I’m not familiar with Torch7 either. However, I think L1Penalty refers to L1 regularization, not to the L1 loss function.

If we think about machine learning as learning some function, L1 regularization is a way to prevent overfitting by “smoothing” the function. In particular, it penalizes the overall magnitude of the weights.
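In symbols (my notation, not from the linked post): with weights $w$ and penalty strength $\lambda$, the regularized objective is

$$\mathcal{L}_{\text{total}}(w) = \mathcal{L}_{\text{data}}(w) + \lambda \sum_i |w_i|$$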

Here’s a good article on the difference between L1 regularization and L1 loss: http://www.chioka.in/differences-between-l1-and-l2-as-loss-function-and-regularization/

Most of the PyTorch optimizers include some form of L2 regularization (the weight_decay parameter).


Could you point to some examples of how one would use L1Penalty and L2 regularization in PyTorch?

Many optimizers have an L2 penalty option (weight_decay): http://pytorch.org/docs/master/optim.html?highlight=torch%20optim#torch.optim.Adagrad. This adds a constant times the weights to the gradient of the loss (you can check this behavior in the source code).
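For example (a minimal sketch; the model, learning rate, and decay value are all placeholders):

import torch

model = torch.nn.Linear(10, 1)
# weight_decay adds an L2 penalty on the parameters to every update
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)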

You could probably add manually defined L1/L2 regularization by adding the relevant penalty term to the output of a loss function.
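Here’s a minimal sketch of that idea for an L1 penalty on the weights (model, criterion, and l1_lambda are all placeholders for your own network, task loss, and penalty strength):

import torch
import torch.nn as nn

model = nn.Linear(10, 1)   # stand-in for your network
criterion = nn.MSELoss()   # stand-in for your task loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
l1_lambda = 1e-4           # arbitrary penalty strength

input = torch.randn(8, 10)
target = torch.randn(8, 1)

optimizer.zero_grad()
loss = criterion(model(input), target)
# add the L1 norm of all parameters so that large weights are penalized
l1_penalty = sum(p.abs().sum() for p in model.parameters())
loss = loss + l1_lambda * l1_penalty
loss.backward()
optimizer.step()

If you want something closer to Torch7’s nn.L1Penalty, which appears to penalize a layer’s activations rather than the weights, you could apply the same idea to an intermediate activation instead of model.parameters().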