Summing the losses and then back-propagating would work.
But it depends on how you model your loss function, and your model's performance will depend on that choice. For example, you can model the loss as:

Loss_total = alpha * Loss_1 + beta * Loss_2

where alpha and beta are hyper-parameters.
When you have more than one loss, you usually combine them with some function (which will determine your model's performance) and then backprop.
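A minimal PyTorch sketch of the weighted combination above. The linear model, the data, and the alpha/beta values are all hypothetical placeholders; the point is only that the two losses are combined into a single scalar before calling backward():

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical setup: one shared model with two targets (e.g. two tasks).
model = nn.Linear(4, 1)
x = torch.randn(8, 4)
target1 = torch.randn(8, 1)
target2 = torch.randn(8, 1)

mse = nn.MSELoss()
alpha, beta = 1.0, 0.5  # hyper-parameters weighting each loss term

pred = model(x)
loss1 = mse(pred, target1)
loss2 = mse(pred, target2)

# Combine into a single scalar: Loss_total = alpha * Loss_1 + beta * Loss_2
total_loss = alpha * loss1 + beta * loss2
total_loss.backward()  # gradients flow back from both terms at once
```

Because autograd sums gradients from every term contributing to the scalar, one backward() call on the combined loss updates the model with respect to both objectives.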
Another example of a loss function: suppose input1 and input2 are expected to produce the same output. In that case, you might want to penalize the network more when Predicted_Values_1 and Predicted_Values_2 differ. Such a loss function ensures the network is trained to produce the same output for both inputs. (This term is added on top of the normal loss functions to achieve better results, since we already know the two outputs should be the same.)
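A sketch of that consistency penalty in PyTorch. The model, the two inputs, and the weight gamma are hypothetical; the extra mse(pred1, pred2) term is what penalizes the network when the two predictions disagree:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical setup: input1 and input2 are two views of the same example,
# so the network should produce the same output for both.
model = nn.Linear(4, 1)
input1 = torch.randn(8, 4)
input2 = input1 + 0.01 * torch.randn(8, 4)  # slightly perturbed copy
target = torch.randn(8, 1)

mse = nn.MSELoss()
pred1 = model(input1)
pred2 = model(input2)

# Normal supervised losses, plus an extra term that grows when the
# two predictions differ (gamma is a hypothetical weighting factor).
gamma = 2.0
loss = mse(pred1, target) + mse(pred2, target) + gamma * mse(pred1, pred2)
loss.backward()
```

The consistency term acts as a regularizer on top of the usual losses: it contributes nothing when the two predictions already agree, and pushes them together otherwise.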
Summary: the loss function should be modeled based on what the expected outputs of the two predicted values are, and on how we can use them to make the network learn faster.