Use of LossClosure in optimizers

Certain optimizers, particularly those with an internal iterative structure, require a LossClosure object to be passed to their step() method. (LBFGS and perhaps some newer, relatively sophisticated optimizers come to mind.) In fact, the new PyTorch 1.5 API appears to require that an object of this sort be passed to all Optimizer objects; this seems to be coupled to the demise of LossClosureOptimizer as a separate base class. Many optimizers don’t actually use it, of course, in which case it can be defaulted to nullptr.

The instances of this type that I have seen in the frontend source code expect something like

LossClosure = std::function<Tensor()>

and explicit invocations within optimizer code simply execute this without arguments, as prescribed by this signature.
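
For reference, the relevant declarations look roughly like this (paraphrased from the 1.5-era torch/optim/optimizer.h, simplified rather than quoted verbatim):

```cpp
// In namespace torch::optim (paraphrased):
using LossClosure = std::function<Tensor()>;

struct Optimizer {
  // The closure defaults to nullptr, so optimizers that never
  // re-evaluate the loss (SGD, Adam, ...) can be stepped without one.
  virtual Tensor step(LossClosure closure = nullptr) = 0;
  // ...
};
```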

A loss calculation requires comparing computed results with targets, so presumably these objects are expected to be something like lambdas with ‘this’ capture in order to work with this signature. I am looking for an actual example of the definition of such a LossClosure, usable, for example, with the LBFGS optimizer. This is a bit beyond the scope of the tutorials I have come across, and I would appreciate any pointers (shared, unique, intrusive, whatever) to possible enlightenment.

Thanks,
Eric

The tests provide a code snippet along these lines. Would that help?
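
A minimal sketch in the same spirit, assuming a toy linear model and MSE loss rather than the exact test setup (the model, data, and learning rate are just placeholders):

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  // Toy regression problem: fit y = 2x with a single linear layer.
  auto model = torch::nn::Linear(1, 1);
  auto input = torch::randn({16, 1});
  auto target = input * 2.0;

  torch::optim::LBFGS optimizer(
      model->parameters(), torch::optim::LBFGSOptions(/*lr=*/0.1));

  // The closure matches LossClosure = std::function<Tensor()>.
  // It captures the model, data, and optimizer by reference,
  // recomputes the loss, runs backward(), and returns the loss
  // tensor so LBFGS can re-evaluate it internally.
  auto closure = [&]() -> torch::Tensor {
    optimizer.zero_grad();
    auto output = model->forward(input);
    auto loss = torch::mse_loss(output, target);
    loss.backward();
    return loss;
  };

  for (int epoch = 0; epoch < 10; ++epoch) {
    auto loss = optimizer.step(closure);
    std::cout << "epoch " << epoch
              << " loss " << loss.item<float>() << std::endl;
  }
}
```

The key point is that LBFGS may call the closure several times per step() call, so everything it needs has to be captured (here by reference) rather than passed as arguments.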

Yes, that looks like an excellent model for what I’m looking for. (I’m probing the source code using KDevelop, but didn’t think to go down the test tree; a good lesson.) Many thanks.

Usually new code won’t land without a proper test, so the tests are the first place I look for the right usage. :wink:
Tutorials usually take a bit longer to appear and might not cover all use cases.