Loss is not decreasing for fusion network

Hi, I am creating a network consisting of two modules, each of which has its own separate classifier. At the end I add up the weighted losses of the two modules and use the combined loss for backpropagation (a rough sketch of what I mean is below).
But when I train the model, I can see that only the loss of the first module (L1) is decreasing, while the loss of the other one is not. I also tried changing the learning rate, but it did not help much.
I am not sure what the issue is, and I would like to know whether the approach I am following is correct. Please share your suggestions and advice.
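Simplified sketch of the loss combination I am describing (the module definitions, sizes, and loss weights here are just placeholders, not my actual model):

```python
import torch
import torch.nn as nn

# Minimal stand-ins for the two branches; each ends in its own classifier head.
module1 = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
module2 = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(
    list(module1.parameters()) + list(module2.parameters()), lr=1e-3
)

w1, w2 = 0.5, 0.5  # weights on the two losses (arbitrary values)

x = torch.randn(8, 16)          # dummy batch
y = torch.randint(0, 4, (8,))   # dummy targets

optimizer.zero_grad()
loss1 = criterion(module1(x), y)   # loss of the first module (L1)
loss2 = criterion(module2(x), y)   # loss of the second module
loss = w1 * loss1 + w2 * loss2     # combined weighted loss used for backprop
loss.backward()
optimizer.step()
```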

Below is a high-level snapshot of the architecture I am creating.

This is with reference to a previous question.

I am trying to train the entire network, which consists of two modules with their own classifiers, and I observed that the two modules converge at different speeds. They also seem to need different learning rates: I tried several combinations, but only the loss of the first module decreases, not the second. How can I find a common learning rate in such a case? Is there an existing way to do so?
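For context, this is roughly what I mean by giving each module its own learning rate, reusing the `module1`/`module2` stand-ins from the sketch above (the learning-rate values are just placeholders):

```python
import torch

# One optimizer, but a separate learning rate per module via parameter groups.
optimizer = torch.optim.Adam([
    {"params": module1.parameters(), "lr": 1e-3},  # faster-converging branch
    {"params": module2.parameters(), "lr": 1e-4},  # slower-converging branch
])
```

With parameter groups, a single backward pass on the combined loss still updates everything, but each group keeps its own `lr` setting, so the two modules do not have to share one learning rate.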