Equal/scaled weight to multiple loss functions

Loss = Loss1 + Loss2 + Loss3 + …

What’s the best way to make sure that each LossX gets equal weight (relative to the gradient vector) regardless of its magnitude? Should I multiply the losses instead (Loss1 * Loss2 * Loss3), or call backward() once per loss to accumulate the gradients? A toy sketch of the two options I mean:
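```python
import torch

# Toy stand-ins for my real model and losses.
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(8, 4)

def compute_losses():
    y = model(x)
    return y.pow(2).mean(), y.abs().mean()  # stand-ins for Loss1, Loss2

# Option A: sum the losses, single backward pass.
optimizer.zero_grad()
loss1, loss2 = compute_losses()
(loss1 + loss2).backward()
optimizer.step()

# Option B: one backward() per loss; gradients accumulate in .grad.
optimizer.zero_grad()
loss1, loss2 = compute_losses()
loss1.backward(retain_graph=True)
loss2.backward()
optimizer.step()
```

As far as I can tell, A and B are mathematically equivalent (gradients are linear, so the gradient of the sum is the sum of the gradients), which is partly why I’m asking whether there is a better combination rule.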

I may also wish to scale each loss individually (equivalent to giving each its own learning rate). Merely adding the losses and scaling the sum does not seem like a good solution, since higher-magnitude losses will have more influence on the gradient even though they all measure different things. A hyperparameter search over the scale factors does not appeal to me either; there should be a simple analytical solution, something like the sketch below.
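To make the “equal weight relative to the gradient vector” part concrete, this is the kind of thing I have in mind: compute each loss’s gradient separately, normalize it to unit L2 norm, apply my own scale factor, and only then accumulate. A rough sketch with a toy model (all names are placeholders, not from any library):

```python
import torch

model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
params = [p for p in model.parameters() if p.requires_grad]
x = torch.randn(8, 4)

y = model(x)
loss_terms = [y.pow(2).mean(), y.abs().mean()]  # stand-ins for Loss1, Loss2
weights = [1.0, 0.5]                            # my per-loss scale factors

optimizer.zero_grad()
for w, loss in zip(weights, loss_terms):
    # Per-loss gradient w.r.t. all parameters.
    grads = torch.autograd.grad(loss, params, retain_graph=True)
    # Global L2 norm of this loss's gradient vector.
    norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    for p, g in zip(params, grads):
        # Normalize to unit norm, then apply the scale factor.
        scaled = w * g / (norm + 1e-12)
        p.grad = scaled if p.grad is None else p.grad + scaled
optimizer.step()
```

With this, each loss contributes a gradient of norm w regardless of the raw loss magnitude, but it costs one extra autograd pass per loss. Is this the right approach, or is there a more standard way to do it?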