Why does LBFGS stop before reaching max_iter?

Hello, when training a neural network with LBFGS as the optimizer, I set the maximum number of iterations to 5000 and very small tolerance values, such as 1e-222. However, sometimes the optimizer stops after just a few iterations, and other times it exceeds the maximum number of iterations. How can I address this?
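
For context, a minimal sketch of the kind of configuration I mean, assuming PyTorch's torch.optim.LBFGS; the model, data, and loss below are placeholders for my actual training code:

```python
import torch

model = torch.nn.Linear(10, 1)                   # placeholder model
x, y = torch.randn(32, 10), torch.randn(32, 1)   # placeholder data
loss_fn = torch.nn.MSELoss()

optimizer = torch.optim.LBFGS(
    model.parameters(),
    max_iter=5000,            # maximum inner iterations per .step() call
    tolerance_grad=1e-222,    # very small first-order optimality tolerance
    tolerance_change=1e-222,  # very small tolerance on parameter/loss change
)

def closure():
    # LBFGS may re-evaluate the loss and gradients several times per step,
    # so it requires a closure.
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    return loss

optimizer.step(closure)
```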

Did you check whether it sees an exact 0 in the gradients?
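
You could log that inside the closure you pass to optimizer.step. A minimal sketch, assuming the usual torch.optim.LBFGS setup (model, loss_fn, x, y stand in for your own objects):

```python
def closure():
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    # Inspect the gradients LBFGS sees at this evaluation.
    max_abs_grad = max(
        p.grad.abs().max().item()
        for p in model.parameters()
        if p.grad is not None
    )
    if max_abs_grad == 0.0:
        print("all gradients are exactly zero at this evaluation")
    else:
        print(f"largest gradient magnitude: {max_abs_grad:.3e}")
    return loss
```

If the gradients are exactly zero at some evaluation, the optimizer has nothing left to do and will stop early regardless of how small you make the tolerances.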

Best regards

Thomas