Hi, I’m interested in combining loss criteria, and I wrote a little function to try to do that. Here’s what DOESN’T work…

```
def calc_loss(Y_pred, Y_true, criteria, lambdas):
    # Y_pred: the output from the network
    # Y_true: the target values
    # criteria: an iterable of PyTorch loss criteria, e.g. [torch.nn.MSELoss(), torch.nn.L1Loss()]
    # lambdas: a list of regularization parameters
    assert len(criteria) == len(lambdas), "Must have the same number of criteria as lambdas"
    loss = torch.zeros(1, requires_grad=True)  # not sure how to properly initialize a single PyTorch number for autograd
    for i in range(len(criteria)):
        loss += lambdas[i] * criteria[i](Y_pred, Y_true)
    return loss
```

When I run that, I get a RuntimeError at the `loss +=` line…

```
RuntimeError: a leaf Variable that requires grad has been used in an in-place operation.
```

So instead, here’s the *working* routine that I have right now, but it’s just not ‘elegant’ IMHO:

```
def calc_loss(Y_pred, Y_true, criteria, lambdas):
    assert len(criteria) == len(lambdas), "Must have the same number of criteria as lambdas"
    for i in range(len(criteria)):
        if i == 0:
            loss = lambdas[i] * criteria[i](Y_pred, Y_true)
        else:
            loss += lambdas[i] * criteria[i](Y_pred, Y_true)
    return loss
```
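
For what it’s worth, the least inelegant variant I’ve come up with so far is a `zip`/`sum` one-liner. I believe it’s the same logic as the loop above: `sum()` accumulates out-of-place starting from `0`, so there’s no explicit zero initialization to worry about. (The criteria here can be any callables taking `(Y_pred, Y_true)`, PyTorch losses included.)

```
def calc_loss(Y_pred, Y_true, criteria, lambdas):
    assert len(criteria) == len(lambdas), "Must have the same number of criteria as lambdas"
    # sum() starts at 0 and builds the total with out-of-place additions,
    # so no special autograd-aware zero initialization is needed
    return sum(lam * crit(Y_pred, Y_true) for crit, lam in zip(criteria, lambdas))
```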

For future reference, how should one properly initialize a ‘zero’ loss of this type?

(I’m using PyTorch 0.4.)

Thanks!

**PS** - I also tried a simple…

```
loss = torch.dot(lambdas, criteria(Y_pred, Y_true))
```

…but that was just wishful thinking: `criteria` is a list, not a callable, so you get `TypeError: 'list' object is not callable`. Even if you wrap the lambdas in `torch.tensor()`, you get the same error, since the error comes from calling the list itself.
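
Come to think of it, maybe the dot-product idea could be salvaged by stacking the per-criterion scalar losses into a 1-D tensor first, then dotting that against the lambdas converted to a tensor. This is untested speculation on my part:

```
import torch

def calc_loss(Y_pred, Y_true, criteria, lambdas):
    # Evaluate each criterion, stack the scalar losses into a 1-D tensor,
    # then weight-and-sum them in one shot with torch.dot
    losses = torch.stack([crit(Y_pred, Y_true) for crit in criteria])
    return torch.dot(torch.tensor(lambdas), losses)
```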