I am new to working with the LBFGS optimizer and have found it to produce incredibly impressive results. In the cases I have applied it to, it has far outperformed other optimizers such as Adam. When applying it to a new and far more complex case than the previous ones, it also converges impressively; however, as it gets close to the solution I am looking for, the training process stops.
I have done some research and experimentation with the LBFGS method and have concluded that this premature termination is controlled by the tolerance_grad and tolerance_change parameters. I have set these to 1e-200 and 1e-400 respectively, and while the process does last longer, it still terminates prematurely. Is there any way I can turn this automatic termination off and replace it with a manually assigned termination condition? Perhaps by setting tolerance_grad and tolerance_change equal to 0 and adding the termination requirement to the optimizer's closure function instead?
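To illustrate what I have in mind, here is a minimal sketch (the model, data, and loss threshold are just placeholders): both tolerances are set to 0 so that the built-in convergence checks, which compare against <=, can only fire on an exact zero, and the stopping condition is enforced manually in the outer loop around optimizer.step() rather than inside the closure itself.

```python
import torch

# Placeholder model and data; substitute your own problem here.
model = torch.nn.Linear(10, 1)
x, y = torch.randn(64, 10), torch.randn(64, 1)
loss_fn = torch.nn.MSELoss()

# Zero tolerances so LBFGS's internal convergence checks
# effectively never trigger the early termination.
optimizer = torch.optim.LBFGS(
    model.parameters(),
    lr=1.0,
    max_iter=20,               # inner iterations per .step() call
    tolerance_grad=0.0,
    tolerance_change=0.0,
    history_size=100,
    line_search_fn="strong_wolfe",
)

def closure():
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    return loss

# Manually assigned termination condition in the outer loop
# (the threshold and epoch cap are arbitrary placeholders).
target_loss = 1e-8
for epoch in range(10_000):
    loss = optimizer.step(closure)
    if loss.item() < target_loss:
        break
```

Note that each .step() call still returns after max_iter inner iterations regardless of the tolerances, so the outer loop simply keeps re-invoking it until my own condition is met. Is this a reasonable way to do it, or is there a cleaner mechanism?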