Custom loss proportionality

I wrote a custom loss function, loss = XXXX.
I found that it didn't work (the loss did not decrease during training), but if I multiply the result by 2, for example, then it does work:
loss = XXXX * 2.0

Does that make sense? Should it be that sensitive to scaling?

Could you try doubling the learning rate instead and check whether you see a similar effect?
Also, was the loss constant with the original loss function? How large were the gradients?
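The reason doubling the learning rate is a useful check: gradients scale linearly with the loss, so for plain gradient descent (no momentum or adaptive scaling), multiplying the loss by k produces exactly the same parameter update as multiplying the learning rate by k. A minimal framework-free sketch with a made-up one-parameter squared-error example (the function and values are illustrative, not from your code):

```python
def one_step(w, x, y, lr, loss_scale):
    # loss = loss_scale * (w*x - y)**2
    # d(loss)/dw = loss_scale * 2 * (w*x - y) * x  -- gradient scales linearly
    grad = loss_scale * 2.0 * (w * x - y) * x
    # plain gradient-descent update
    return w - lr * grad

w0, x, y = 0.5, 1.5, 2.0
w_scaled_loss = one_step(w0, x, y, lr=0.1, loss_scale=2.0)  # loss * 2
w_scaled_lr   = one_step(w0, x, y, lr=0.2, loss_scale=1.0)  # lr * 2
print(w_scaled_loss == w_scaled_lr)  # True: the two updates are identical
```

So if doubling the learning rate also makes training work, the original loss was probably fine and the learning rate was simply too small; if it does not, the scale sensitivity points elsewhere (e.g. an adaptive optimizer, gradient clipping, or numerical issues in the loss).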