How can a combined loss function like the following be implemented?

Loss = loss1 * exp(-w1) + w1 + loss2 * exp(-w2) + w2

where w1 and w2 are also trainable parameters.

You can implement it just like that:

```
import torch

def custom_loss(loss1, loss2, w1, w2):
    return loss1 * torch.exp(-w1) + w1 + loss2 * torch.exp(-w2) + w2
```

and make sure you define the tensors w1 and w2 with requires_grad=True and pass them to your optimizer so that they can be optimized.
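To make the advice above concrete, here is a minimal sketch on a current PyTorch version. The network `net` and the constant placeholder losses are hypothetical, just to show w1 and w2 being created as leaf tensors with requires_grad=True and included in the optimizer's parameter list:

```python
import torch
import torch.optim as optim

# Trainable uncertainty weights: leaf tensors, so the optimizer can update them.
w1 = torch.zeros(1, requires_grad=True)
w2 = torch.zeros(1, requires_grad=True)

def custom_loss(loss1, loss2, w1, w2):
    return loss1 * torch.exp(-w1) + w1 + loss2 * torch.exp(-w2) + w2

# Hypothetical model; the key point is passing w1 and w2 to the optimizer
# alongside the network's own parameters.
net = torch.nn.Linear(4, 2)
optimizer = optim.Adam(list(net.parameters()) + [w1, w2], lr=1e-3)

# Placeholder task losses, standing in for real CrossEntropy/MSE values.
loss1 = torch.tensor(2.0)
loss2 = torch.tensor(0.5)

loss = custom_loss(loss1, loss2, w1, w2)
loss.backward()
optimizer.step()  # w1 and w2 now receive updates along with the network weights
```

After one step, both weights have non-None gradients and have moved away from their initial values, which is exactly what the thread's asker was missing.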


Hi @richard, thanks for the reply:

I made this customized loss, but the optimizer does not optimize w1 and w2. (I am using PyTorch 0.3.)

```
class MY_LOSS(nn.Module):
    def __init__(self):
        super(MY_LOSS, self).__init__()
        self.loss1 = nn.CrossEntropyLoss()
        self.loss2 = nn.MSELoss()
        self.w1 = Variable(torch.Tensor(1), requires_grad=True).type(FLOAT)
        self.w2 = Variable(torch.Tensor(1), requires_grad=True).type(FLOAT)

    def forward(self, inp1, tar1, inp2, tar2):
        loss1 = self.loss1(inp1, tar1)
        loss2 = self.loss2(inp2, tar2)
        combined_loss = loss1 * torch.exp(-self.w1) + self.w1 + loss2 * torch.exp(-self.w2) + self.w2
        return combined_loss, loss1, loss2, self.w1, self.w2
```


Did you pass them to the optimizer to be optimized?

I think it should be something like this (where my_loss is an instance of MY_LOSS, and net's parameters are passed rather than the module itself):

```
optimizer = optim.Adam(list(net.parameters()) + [my_loss.w1, my_loss.w2])
```
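Not from the thread, but a cleaner alternative sketch: if the weights are declared as nn.Parameter instead of Variable, they are registered with the loss module automatically, so criterion.parameters() hands them to the optimizer without naming them one by one (and avoids the .type(FLOAT) call, which replaces the leaf tensor with a non-leaf copy that never gets gradients). The network here is a hypothetical placeholder:

```python
import torch
import torch.nn as nn
import torch.optim as optim

class MyLoss(nn.Module):
    def __init__(self):
        super().__init__()
        self.loss1 = nn.CrossEntropyLoss()
        self.loss2 = nn.MSELoss()
        # nn.Parameter registers w1/w2 with the module,
        # so .parameters() picks them up automatically.
        self.w1 = nn.Parameter(torch.zeros(1))
        self.w2 = nn.Parameter(torch.zeros(1))

    def forward(self, inp1, tar1, inp2, tar2):
        loss1 = self.loss1(inp1, tar1)
        loss2 = self.loss2(inp2, tar2)
        return (loss1 * torch.exp(-self.w1) + self.w1
                + loss2 * torch.exp(-self.w2) + self.w2)

net = nn.Linear(4, 3)  # hypothetical network
criterion = MyLoss()
optimizer = optim.Adam(list(net.parameters()) + list(criterion.parameters()))
```

With this layout the combined loss, the network weights, and the two uncertainty weights all train from a single optimizer.step() call.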