One of two losses does not converge

I have two losses, L1 and L2, and I train the model on a single image at a time. For some images both losses converge towards 0, but for other images L1 converges to a reasonable value while L2 stops decreasing after some epochs.

After increasing the dataset size, the problem remains for the same images on which L2 does not decrease. I have experimented with different learning rates and dense layers, but the problem still persists.

Which parameters should I try in order to reduce both losses?

One aspect is the balancing (weighting) of the two losses: giving l2 a higher weight will prefer reducing it over achieving the same reduction of l1.
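As a minimal sketch of that weighting (the weights and the loss values here are illustrative placeholders, not taken from your setup):

```python
import torch

# Illustrative placeholder losses; in practice these come from your model
l1 = torch.tensor(0.5, requires_grad=True)
l2 = torch.tensor(2.0, requires_grad=True)

# A larger w2 makes the optimizer prioritize reducing l2
# over an equal reduction of l1
w1, w2 = 1.0, 10.0
total_loss = w1 * l1 + w2 * l2
total_loss.backward()

# l2 receives a gradient that is w2 times larger than l1's
print(total_loss.item())   # 0.5 + 10 * 2.0 = 20.5
print(l1.grad.item(), l2.grad.item())
```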

Another thing to check is whether l2 has actually reached a minimum, e.g. whether the gradients of l2 are (almost) zero. In that case you would want to revisit the modelling. As always, overfitting a single batch is a good sanity check here.
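A minimal version of that sanity check might look like this (the toy model, data shapes, and learning rate are assumptions for illustration): train on one fixed batch and confirm that both losses can be driven close to zero.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy two-output model, standing in for the two-loss setup
model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
x = torch.randn(16, 8)        # one fixed batch, reused every step
target = torch.randn(16, 2)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(1000):
    opt.zero_grad()
    out = model(x)
    l1 = nn.functional.mse_loss(out[:, 0], target[:, 0])
    l2 = nn.functional.mse_loss(out[:, 1], target[:, 1])
    (l1 + l2).backward()
    opt.step()

# If the model cannot overfit even a single batch,
# the modelling (not the data) is the likely problem
print(f"l1={l1.item():.4f}, l2={l2.item():.4f}")
```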

Best regards

Thomas

Giving a higher weight to the l2 loss doesn't improve the result. I need to check the gradients of the different layers. Could you please give reference code, or a link, for plotting and visualizing the gradients of the different layers?
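A minimal sketch for inspecting per-layer gradients (the model and data here are stand-ins; swap in your own model and loss): after `backward()`, collect the gradient norm of every parameter via `named_parameters()` and plot them as a bar chart with matplotlib. (Near-)zero norms in early layers would point to a vanishing gradient signal for that loss. For tracking gradient distributions over training, TensorBoard's `SummaryWriter.add_histogram` serves the same purpose.

```python
import torch
import torch.nn as nn
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

torch.manual_seed(0)

# Stand-in model and loss; replace with your own
model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
x, target = torch.randn(4, 8), torch.randn(4, 1)
loss = nn.functional.mse_loss(model(x), target)
loss.backward()

# One gradient norm per parameter tensor (weights and biases per layer)
names, norms = [], []
for name, p in model.named_parameters():
    if p.grad is not None:
        names.append(name)
        norms.append(p.grad.norm().item())
        print(f"{name}: {norms[-1]:.6f}")

plt.figure(figsize=(6, 3))
plt.bar(range(len(norms)), norms)
plt.xticks(range(len(names)), names, rotation=45, ha="right")
plt.ylabel("gradient norm")
plt.tight_layout()
plt.savefig("grad_norms.png")
```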