One loss with multi-branch training: zero weight gradients in two branches

My network has one input fed into three branches, each processed differently, and a single loss function constraining the output against the ground truth (GT). I found that the gradients of the weights and biases in the first branch are normal, but in the second and third branches the weight gradients are always zero while the bias gradients are normal. What's happening?
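For reference, here is a minimal sketch of the setup I mean, assuming PyTorch (the `Linear` branches and MSE loss are just stand-ins for my real layers), together with the gradient check I ran after one backward pass:

```python
import torch
import torch.nn as nn

class ThreeBranchNet(nn.Module):
    def __init__(self):
        super().__init__()
        # One shared input, processed by three different branches
        # (placeholders for the real branch architectures).
        self.branch1 = nn.Linear(8, 4)
        self.branch2 = nn.Linear(8, 4)
        self.branch3 = nn.Linear(8, 4)

    def forward(self, x):
        # All branches see the same input; outputs are combined so that
        # one loss constrains all three.
        return self.branch1(x) + self.branch2(x) + self.branch3(x)

net = ThreeBranchNet()
x = torch.randn(2, 8)
gt = torch.randn(2, 4)

# Single loss between the network output and the ground truth.
loss = nn.functional.mse_loss(net(x), gt)
loss.backward()

# How I inspect the gradients: in my real model, branch2/branch3 weight
# grads come out all-zero while their bias grads are normal.
for name, p in net.named_parameters():
    print(name, "all-zero grad:", bool(torch.all(p.grad == 0)))
```

In this toy version every parameter receives a nonzero gradient, which is what I expected from my real network as well.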