Certain optimizers, particularly those with an internal iterative structure, require a LossClosure argument to their step() method. (LBFGS and perhaps some newer, relatively sophisticated optimizers come to mind.) In fact, the PyTorch 1.5 C++ API appears to require that an object of this sort be accepted by all Optimizer objects; this seems to be coupled to the demise of LossClosureOptimizer as a separate base class. Many optimizers don't actually use it, of course, in which case it can be defaulted to nullptr.
The declaration I have seen in the frontend source code is something like
using LossClosure = std::function<Tensor()>;
and explicit invocations within the optimizer code simply execute it without arguments, as prescribed by this signature.
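To illustrate the mechanism I have in mind without pulling in libtorch, here is a minimal sketch in plain C++, with a double standing in for Tensor. The names fake_step and Trainer are hypothetical, invented for this sketch, and not frontend API:

```cpp
#include <functional>

// Hypothetical stand-in for the frontend's alias,
// using LossClosure = std::function<Tensor()>;
using LossClosure = std::function<double()>;

// A toy "optimizer" step that, like LBFGS, may invoke the closure
// several times per call (e.g. during a line search).
double fake_step(const LossClosure& closure) {
  double loss = 0.0;
  for (int i = 0; i < 3; ++i) {
    loss = closure();
  }
  return loss;
}

struct Trainer {
  double weight = 0.0;
  double target = 4.0;
  int calls = 0;  // counts how often the optimizer re-evaluated the loss

  double train_step() {
    // Lambda with 'this' capture: it takes no arguments, matching the
    // signature, yet still sees the model state and the targets.
    auto closure = [this]() -> double {
      ++calls;
      double pred = weight * 2.0;              // "forward pass"
      return (pred - target) * (pred - target); // squared-error "loss"
    };
    return fake_step(closure);
  }
};
```

The point is only that the no-argument signature is no obstacle: the comparison against targets happens inside the capture, not through the parameter list.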
A loss calculation requires comparing computed results with targets, so presumably these objects are expected to be something like lambdas with a 'this' capture in order to fit this no-argument signature. I am looking for an actual example of the definition of such a LossClosure, usable for example with the LBFGS optimizer. This is a bit beyond the scope of the tutorials I have come across, and I would appreciate any pointers (shared, unique, intrusive, whatever) to possible enlightenment.