Why does my training loss not decrease over time?

I've got a very weird problem here. I'm using a very simple deep learning model with only linear layers, but the training loss does not decrease at all during training.

When I feed some random inputs to the model, it runs fine, so I don't think the problem is in the model itself. It must be something wrong with my inputs. I load the features from a '.npy' file.
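
Here is roughly how I load and sanity-check the features (a minimal sketch only; `features.npy` is a placeholder for my actual file path, and I'm assuming a 2-D array of shape `[num_samples, num_features]`):

```python
import numpy as np

# Load the saved feature matrix (placeholder path; assumed shape: [num_samples, num_features])
features = np.load("features.npy")

# Per-feature statistics, to see whether the columns are on very different scales
print("shape:", features.shape)
print("min per feature: ", features.min(axis=0))
print("max per feature: ", features.max(axis=0))
print("mean per feature:", features.mean(axis=0))
print("std per feature: ", features.std(axis=0))
```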

Any suggestions would be greatly appreciated.
Thank you

There could be many reasons why the loss does not decrease over time, ranging from the network design to the input data. Without looking at the data and the network, it is difficult to say what the problem might be.

Please post your data-loading and training code here.

Hi, I figured it out myself. The problem was that my input features were not on the same scale. I normalized them and it works now. Thank you!
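
For anyone running into the same issue, this is roughly what the fix looks like (a sketch only; `features.npy` and the `[num_samples, num_features]` shape are assumptions, not from my actual code):

```python
import numpy as np
import torch

# Load the raw features (placeholder path; assumed shape: [num_samples, num_features])
features = np.load("features.npy").astype(np.float32)

# Per-feature mean and std; in a real setup these should be computed on the
# training split only and then reused for validation/test data.
mean = features.mean(axis=0, keepdims=True)
std = features.std(axis=0, keepdims=True) + 1e-8  # avoid division by zero

# Standardize each feature so all columns are on a comparable scale
normalized = (features - mean) / std

# Convert to a tensor for the linear model
inputs = torch.from_numpy(normalized)
print(inputs.mean(dim=0), inputs.std(dim=0))  # should be roughly 0 and 1 per feature
```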