LBFGS converges but stops early

Hi,

I am new to working with the LBFGS optimizer and have found it to give incredibly impressive results. In the cases I have applied it to, it has far outperformed other optimizers such as Adam. When applying it to a new case that is far more complex than the previous ones, it also converges impressively; however, when it gets close to the solution I am looking for, the training process stops.

I have done some research and experimentation with the LBFGS method and have concluded that this premature termination is governed by tolerance_grad and tolerance_change. I have set these to 1e-200 and 1e-400 respectively, and the process does last longer, but it still terminates prematurely. Is there any way I can turn this automatic termination off and replace it with a manually assigned termination condition? Perhaps by setting tolerance_grad and tolerance_change to 0 and adding the termination requirement to the optimizer's closure function instead?
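For what it's worth, here is a minimal sketch of that idea, assuming PyTorch's torch.optim.LBFGS (the toy quadratic objective, the epoch count, and the 1e-12 threshold are placeholders, not anything from the thread): zero out both tolerances so the built-in tests effectively never fire, and enforce a manual stopping rule in the outer training loop rather than in the closure itself.

```python
import torch

x = torch.tensor([5.0, -3.0], requires_grad=True)  # toy parameters

opt = torch.optim.LBFGS(
    [x],
    lr=1.0,
    max_iter=20,
    tolerance_grad=0.0,     # gradient-norm test fires only at an exact zero gradient
    tolerance_change=0.0,   # parameter/loss-change tests effectively disabled
    line_search_fn="strong_wolfe",
)

def closure():
    opt.zero_grad()
    loss = (x ** 2).sum()   # placeholder objective; substitute your model's loss
    loss.backward()
    return loss

prev = float("inf")
for epoch in range(100):
    # step() returns the loss from the initial closure evaluation
    loss = opt.step(closure).item()
    # manual termination: stop once the loss stops improving meaningfully
    if abs(prev - loss) < 1e-12:
        break
    prev = loss
```

Note the stopping check lives in the outer loop, not in the closure: LBFGS may call the closure many times per step() during line search, so the closure is not a reliable place to count "real" iterations.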

Thanks.


Hi,

I am encountering a similar scenario. Did you find anything interesting that resolves this issue? Thanks for the post.

Hi,

Sorry for my late response. I have not had time to look into the algorithm properly enough to explain what I found, but: with the tolerances set to zero, the algorithm stops once it no longer learns, and with the tolerances set to a negative number, it does not terminate even though it no longer learns. The latter was clear from watching the running loss; the former I deduced from the latter.
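A sketch of why zero and negative tolerances behave differently (this is just an illustration of the kind of comparison such a stopping test performs, not the actual PyTorch source):

```python
def should_stop(grad_max_abs, tolerance_grad):
    # The test compares an absolute value (always >= 0) against the tolerance:
    #  - tolerance_grad == 0  -> fires only when the gradient is exactly zero,
    #    i.e. once the optimizer has stopped learning entirely
    #  - tolerance_grad < 0   -> can never fire, so the loop never terminates
    #    on this condition
    return grad_max_abs <= tolerance_grad

assert should_stop(0.0, 0.0) is True      # exact zero gradient: stops
assert should_stop(1e-300, 0.0) is False  # any nonzero gradient: keeps going
assert should_stop(0.0, -1.0) is False    # negative tolerance: never stops
```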

The algorithm implements line search, so I suppose this makes sense.
