Hi, I am creating a network consisting of two modules, each of which has its own separate classifier. At the end I add up the weighted losses of both modules and use the combined loss for backpropagation.
But when I train the model, I observe that the loss of only one module (L1) is decreasing, while the other one stays flat. I tried adjusting the learning rate as well, but it did not help much.
I am not sure what the issue is, but I would like to know whether the approach I am following is correct. Please share your suggestions and advice.
Below is a high-level snapshot of the architecture I am creating.
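Roughly something along these lines (a simplified PyTorch sketch with made-up layer sizes and loss weights; the actual modules are more complex):

```python
import torch
import torch.nn as nn

class TwoModuleNet(nn.Module):
    """Two independent modules, each with its own classifier head."""
    def __init__(self, in_dim=32, hidden=64, n_classes=10):
        super().__init__()
        # Module 1 + its classifier
        self.module1 = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.classifier1 = nn.Linear(hidden, n_classes)
        # Module 2 + its classifier
        self.module2 = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.classifier2 = nn.Linear(hidden, n_classes)

    def forward(self, x):
        return self.classifier1(self.module1(x)), self.classifier2(self.module2(x))

model = TwoModuleNet()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch just to illustrate one training step
x = torch.randn(8, 32)
y = torch.randint(0, 10, (8,))

logits1, logits2 = model(x)
loss1 = criterion(logits1, y)
loss2 = criterion(logits2, y)

w1, w2 = 0.5, 0.5                 # loss weights (illustrative values)
loss = w1 * loss1 + w2 * loss2    # combined weighted loss

optimizer.zero_grad()
loss.backward()                   # gradients flow into BOTH modules from the sum
optimizer.step()
```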