I’m currently working on a regression model with input data of range(1, 26) and target data of range(1, 42). I discovered that when I normalize the data to be within range(0, 1) and set the learning rate to 0.001, the loss drops by a huge factor after one epoch compared to when I don’t normalize the training data (e.g. without normalization the loss can be 5.0 after the first epoch, while with normalization it can be 0.01).

So this leaves me wondering: could it be due to a relationship between the normalized input and the learning rate, or between the loss and the learning rate? Although I don’t think the loss should change whether normalized or not, because the target data was normalized accordingly as well.

So what could be the cause?

I’m not sure if I understand this sentence correctly.

Are you comparing these use cases:

```python
# case0: raw data
output = model(unnormalized_input)
loss0 = criterion(output, unnormalized_target)

# case1: normalized data
output = model(normalized_input)
loss1 = criterion(output, normalized_target)
```

If so, then I would expect `loss1` to potentially be smaller than `loss0`, as the magnitude of the output and target is most likely smaller (assuming you are using e.g. `nn.MSELoss`).
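A quick way to see this scale effect in isolation (a toy sketch in plain Python, not your actual model): MSE scales quadratically with the magnitude of the values, so normalizing both output and target shrinks the loss even when the relative error is identical.

```python
def mse(preds, targets):
    """Mean squared error over two equal-length lists."""
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

# Unnormalized: targets in [1, 41], predictions off by 10% of each target.
targets = [float(t) for t in range(1, 42)]
preds = [t * 0.9 for t in targets]
loss0 = mse(preds, targets)

# Normalized by the max target; the relative error is still 10%.
scale = max(targets)
targets_n = [t / scale for t in targets]
preds_n = [p / scale for p in preds]
loss1 = mse(preds_n, targets_n)

print(loss0, loss1)  # loss1 == loss0 / scale**2, a much smaller number
```

So the two losses aren’t directly comparable: the second one is measured in squared *normalized* units.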

Ok, I think I understand now.

I’ll try using `nn.SmoothL1Loss` instead, because `nn.MSELoss` is leading to exploding and vanishing gradients due to the way it penalizes errors: errors greater than 1 are amplified by the squaring, while errors less than 1 are shrunk.
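The gradient behavior behind that choice can be sketched in plain Python (using the piecewise formula that matches PyTorch's SmoothL1 with its default beta=1): MSE's gradient grows linearly with the error and so can explode for large errors, while SmoothL1's gradient saturates at 1.

```python
def mse_grad(err):
    """Derivative of err**2 w.r.t. err: unbounded for large errors."""
    return 2.0 * err

def smooth_l1_grad(err, beta=1.0):
    """Derivative of SmoothL1: quadratic region inside |err| < beta,
    linear (constant-gradient) region outside."""
    if abs(err) < beta:
        return err / beta
    return 1.0 if err > 0 else -1.0

for err in (0.1, 1.0, 10.0, 100.0):
    print(err, mse_grad(err), smooth_l1_grad(err))
# MSE gradients: 0.2, 2.0, 20.0, 200.0 — growing without bound.
# SmoothL1 gradients: 0.1, 1.0, 1.0, 1.0 — capped at 1.
```

This is why SmoothL1 is often more robust to outliers in regression targets than plain MSE.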