MLP regression model always outputs the same value (approaching zero)

Thank you very much! One thing I noticed while experimenting with Adadelta yesterday: when I increase the regularization strength (for example, by raising weight_decay, the L2 penalty), the model tends to produce the same output regardless of the input; when I decrease the penalty, the model tends to overfit. Anyway, thank you for your patience and help!
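For reference, here is a minimal sketch of the two settings I am comparing, assuming a small PyTorch MLP (the layer sizes and weight_decay values below are just illustrative, not my actual configuration):

```python
import torch
import torch.nn as nn

# Hypothetical small MLP regressor, for illustration only
model = nn.Sequential(
    nn.Linear(10, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
)

# A large weight_decay (L2 penalty) shrinks the weights toward zero,
# which can push every prediction toward the same near-constant value.
optimizer_strong_reg = torch.optim.Adadelta(model.parameters(), lr=1.0, weight_decay=1e-2)

# A small (or zero) weight_decay removes that pressure,
# so the model is free to fit, and possibly overfit, the training data.
optimizer_weak_reg = torch.optim.Adadelta(model.parameters(), lr=1.0, weight_decay=0.0)
```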
