I am trying to create a compound loss function where the first part is MSELoss and the second part is the L1-norm regularization of the model's parameters.
The first part is simple:
import torch.nn as nn

MSEloss = nn.MSELoss()
loss = MSEloss(rec_x, x)
But how do I attach the second part?
I appreciate your help!
Thank you for your answer! I didn't mean the L1 loss (which compares predicted and actual values); I need L1-norm regularization. My goal is to penalize large weights of the model.
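In other words, something like this (a rough sketch, where model stands for the network and l1_lambda is a regularization strength I would tune):

l1_penalty = sum(p.abs().sum() for p in model.parameters())
loss = MSEloss(rec_x, x) + l1_lambda * l1_penalty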
import torch

## x in range [0, 1]
x = torch.rand(3, 2, requires_grad=True)
## L1 norm of x: the sum of absolute values
loss = torch.sum(torch.abs(x))
loss.backward()
## the gradient of |x| is 1 for every positive entry, so it should be all ones
x.grad
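The same idea extends to a network: accumulate the L1 norm of every parameter and add it to the data loss. A minimal sketch, assuming a model net, a data loss mse_loss, and a strength l1_lambda:

l1_reg = torch.tensor(0.)
for param in net.parameters():
    l1_reg = l1_reg + param.abs().sum()
loss = mse_loss + l1_lambda * l1_reg
loss.backward()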
I am trying to do the same thing as your question, so I wrote the following code, but it doesn't work. Have you found any solution?
import torch as t
from torch.autograd import Variable

loss_func = t.nn.MSELoss()
optimizer = t.optim.SGD(net.parameters(), lr)
# train the neural network
for epoch in range(EPOCH):
    for i, data in enumerate(train_loader):
        inputs, labels = data
        inputs, labels = Variable(inputs), Variable(labels)
        prediction = net(inputs)
        loss = loss_func(prediction, labels)
        # add the L1 norm of every weight matrix to the loss
        for name, param in net.named_parameters():
            if 'weight' in name:
                L1_1 = Variable(param, requires_grad=True)
                L1_2 = t.norm(L1_1, 1)
                L1_3 = L1_lambda * L1_2
                loss = loss + L1_3
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
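I think the problem is the Variable wrapper: depending on the PyTorch version, Variable(param, requires_grad=True) either raises an error or builds the L1 term on a copy of the parameter, so its gradient never reaches the network's own weights. Parameters already require gradients, so you can use them directly (a sketch of the fix, reusing your L1_lambda):

for name, param in net.named_parameters():
    if 'weight' in name:
        loss = loss + L1_lambda * t.norm(param, 1)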