Hi

Does anyone know how we can enforce the sum of a few parameters to be equal to a given value?

Hi Frida!

You can add an L1 or L2 penalty (regularizing term) to your loss function:

```
# L1 penalty: absolute deviation of the parameter sum from the target
l1_reg = torch.abs(param1 + param2 + param3 - desired_sum)
# or L2 penalty: squared deviation
# l2_reg = (param1 + param2 + param3 - desired_sum) ** 2
loss_total = loss + l1_reg
# or loss_total = loss + l2_reg
```

Then call `loss_total.backward()` and optimize as you normally would.
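Putting the pieces together, a minimal, self-contained sketch might look like this (the three scalar parameters, the zero stand-in loss, and `desired_sum = 10.0` are all made up for illustration):

```python
import torch

# Hypothetical setup: three scalar parameters whose sum we want to be 10.0.
param1 = torch.tensor(1.0, requires_grad=True)
param2 = torch.tensor(2.0, requires_grad=True)
param3 = torch.tensor(3.0, requires_grad=True)
desired_sum = 10.0

optimizer = torch.optim.SGD([param1, param2, param3], lr=0.1)

for _ in range(100):
    optimizer.zero_grad()
    loss = torch.zeros(())  # stand-in for your real prediction loss
    # L1 penalty on the deviation of the parameter sum from the target
    l1_reg = torch.abs(param1 + param2 + param3 - desired_sum)
    loss_total = loss + l1_reg
    loss_total.backward()
    optimizer.step()
```

After this loop the parameter sum sits close to `desired_sum` (it oscillates around the target by roughly the learning-rate-sized step, since the L1 gradient does not shrink near the optimum).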

Best.

K. Frank

Any idea why it becomes lower and then becomes greater, although it is greater than the loss before?

Hi Frida!

The short answer is that I should have included a regularization “importance factor” as a hyperparameter:

`loss_total = loss + reg_weight * l1_reg`

My apologies for leaving out this important detail.

I am assuming that the “it” that “becomes lower and then becomes greater” is `l1_reg` (or `l2_reg`, if that’s the one you’re using).

First, some context:

If you use just `loss` for training, training will push your model parameters to make `loss` smaller (presumably making better predictions). But the training won’t care what your parameter sum is.

Similarly, if you use just `l1_reg` for training, you will push your parameter sum to its desired value. But the training won’t care about `loss`, and your model won’t learn anything about making good predictions.

These two goals – “good predictions” and “desired sum” – can potentially be in competition with one another. `reg_weight` tunes this trade-off.

Add `reg_weight` to your training, and try increasing it until you get adequately close to `desired_sum`.

(It is sometimes helpful to start with a smallish value of `reg_weight` and increase it gradually as training progresses. But there’s no real wisdom about whether or not this will be helpful – like so much in neural networks, sometimes you just have to try it.)
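One simple way to try it is a linear ramp of `reg_weight` over the epochs. Here is a hedged sketch – the parameter values, the ramp endpoints, and the toy prediction loss pulling `param1` toward 1.0 are all invented for illustration:

```python
import torch

# Hypothetical setup: a toy "prediction loss" that wants param1 == 1.0,
# competing with an L1 penalty that wants the parameter sum to be 10.0.
param1 = torch.tensor(1.0, requires_grad=True)
param2 = torch.tensor(2.0, requires_grad=True)
param3 = torch.tensor(3.0, requires_grad=True)
desired_sum = 10.0

optimizer = torch.optim.SGD([param1, param2, param3], lr=0.1)

n_epochs = 200
reg_weight_start, reg_weight_end = 0.01, 1.0  # made-up ramp endpoints

for epoch in range(n_epochs):
    # linear ramp: weak penalty early, full strength by the last epoch
    frac = epoch / (n_epochs - 1)
    reg_weight = reg_weight_start + frac * (reg_weight_end - reg_weight_start)

    optimizer.zero_grad()
    loss = (param1 - 1.0) ** 2  # stand-in prediction loss
    l1_reg = torch.abs(param1 + param2 + param3 - desired_sum)
    loss_total = loss + reg_weight * l1_reg
    loss_total.backward()
    optimizer.step()
```

Early on, the small `reg_weight` lets the prediction loss dominate; as the ramp grows, the penalty pulls the parameter sum toward `desired_sum`, with `param2` and `param3` absorbing most of the adjustment while `param1` stays near the value its own loss prefers.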

Best.

K. Frank