A question about backward gradient when training a model

This is not a question about the usage of pytorch itself, but about a training problem; I'm sorry, but I'd like to get some advice from this forum.
I am training a model with a custom loss function. I don't think there is an error in the loss function itself, but when I check the training gradients, I found something odd. The gradient statistics are:
norm: 0.000244 min: -0.000002 max: 0.000001 median: -0.000000 mean: 0.000000 std: 0.000001
And when I amplify the loss weight by 1000, the statistics become:
norm: 0.395154 min: -0.003929 max: 0.005707 median: 0.000015 mean: -0.000000 std: 0.000852
The average loss value over the training dataset is about 24.0944.
Can you give me some advice about what might be wrong with my model? Thanks for your help.
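For context, this is roughly how I gather those gradient statistics after `loss.backward()`. The model and loss below are just placeholders to make the snippet self-contained; my real model and custom loss are different:

```python
import torch
import torch.nn as nn

# Placeholder model and loss -- substitute your own model and custom criterion.
model = nn.Linear(10, 1)
criterion = nn.MSELoss()

x = torch.randn(32, 10)
y = torch.randn(32, 1)

loss = criterion(model(x), y)
loss.backward()

# Flatten every parameter's gradient into one vector and report statistics.
grads = torch.cat([p.grad.flatten() for p in model.parameters()
                   if p.grad is not None])
print(f"norm: {grads.norm():.6f} min: {grads.min():.6f} "
      f"max: {grads.max():.6f} median: {grads.median():.6f} "
      f"mean: {grads.mean():.6f} std: {grads.std():.6f}")
```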

Did you use the same input when you scaled up your loss?
Which loss function are you using?