Outputs from a simple DNN are always the same regardless of the input

Could you check the weights and biases in both layers?
Sometimes, e.g. when the learning rate is too high, the model just learns the “mean prediction”: the bias is responsible for most of the output, while the weights (and thus the input) become more or less useless.
For example, when I was playing with a facial keypoint dataset, some models just predicted the “mean position” of the keypoints, regardless of the input image.
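As an illustrative sketch (plain NumPy here, with hypothetical weight values, rather than any specific framework): if the weights have collapsed toward zero, the forward pass is dominated by the bias terms, so very different inputs produce nearly identical outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer net whose weights have collapsed to ~0,
# so the biases carry almost all of the prediction.
W1 = rng.normal(scale=1e-6, size=(10, 4))   # near-zero first-layer weights
b1 = np.array([0.5, -0.2, 0.1, 0.3])
W2 = rng.normal(scale=1e-6, size=(4, 2))
b2 = np.array([1.0, -1.0])                  # this bias is the "mean prediction"

def forward(x):
    h = np.maximum(x @ W1 + b1, 0.0)        # ReLU hidden layer
    return h @ W2 + b2

# Two very different inputs yield (almost) the same output, roughly b2:
out_a = forward(rng.normal(size=10))
out_b = forward(rng.normal(size=10) * 100)
print(np.allclose(out_a, out_b, atol=1e-2))
```

Printing the per-layer weight and bias magnitudes (e.g. their absolute means) on your own model is a quick way to check whether you are in this regime.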