This question might fit a general forum, but I am asking specifically about the C++ frontend, so:
As far as I can tell, there is currently no mini-batch version of the LBFGS optimizer. (Please correct me if that's wrong.) My question is fairly simple: when creating a dataloader to use with this optimizer, do I simply create the same kind of batch-style dataloader I would use with a stochastic optimizer, and force the batch size to equal the total training-set size? (I'm guessing I would have to Stack-map it as well.) Or is there an alternative approach to creating a dataloader for this optimizer?
Sorry if this is somehow trivial…