Hi, I have two related questions to ask…

Recently I have been working on a project in which the loss function contains loss A (classification) and loss B (regression). There are only a few classes but lots of boxes in my dataset, which makes it a little hard for my model to decrease the regression loss. (Note: the regression loss is actually decreasing, but it's 3 or 4 times larger than the classification loss.)

Here's my first question:

In this kind of situation, what should I do to make the regression loss go down faster?

Second :

Since the gradient of each parameter is affected by the upstream gradient, if I add a multiplier to the regression loss formula, will that directly scale the gradient and make the regression loss decrease faster?

For example, if my loss calculation looks like this (a smooth L1 / Huber-style loss):

```
# Smooth L1 (Huber) loss: quadratic for small differences, linear for large ones
reg_loss = torch.where(torch.le(reg_dif, 1.0),
                       0.5 * torch.pow(reg_dif, 2),
                       reg_dif - 0.5)
```

Should I simply multiply by a constant, and would that make the gradient different from the original one (without the multiplier)?
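To make the question concrete, here is a minimal sketch (with hypothetical `pred`/`target` values) showing that multiplying the loss by a constant scales every gradient flowing from it by exactly that constant, since `d(w * L)/dx = w * dL/dx`:

```python
import torch

# Smooth L1 loss, matching the snippet above
def smooth_l1(reg_dif):
    return torch.where(torch.le(reg_dif, 1.0),
                       0.5 * torch.pow(reg_dif, 2),
                       reg_dif - 0.5)

pred = torch.tensor([0.3, 2.0], requires_grad=True)  # hypothetical predictions
target = torch.zeros(2)                              # hypothetical targets

# Unweighted loss and its gradient
loss = smooth_l1((pred - target).abs()).sum()
loss.backward()
g_unweighted = pred.grad.clone()

# Weighted loss: multiply by a constant, recompute the gradient
pred.grad = None
weight = 4.0  # hypothetical regression-loss weight
loss_w = (weight * smooth_l1((pred - target).abs())).sum()
loss_w.backward()

# The gradient is scaled by exactly the same constant
assert torch.allclose(pred.grad, weight * g_unweighted)
```

So yes, a constant multiplier directly scales the gradient contribution of the regression branch; whether that actually speeds up convergence (rather than just rebalancing the two losses) is a separate tuning question.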

Thanks