I am using torch.optim.LBFGS and I want to get the number of function evaluations performed during an optimization run. Currently I am using:
optimizer = torch.optim.LBFGS([x], **optim_params)  # params must be an iterable
stateOneEpoch = optimizer.state[optimizer._params[0]]  # state is keyed by the first parameter
nfeval = stateOneEpoch["func_evals"]
which gives a reasonable number. However, this relies on undocumented internals (the leading underscore in _params marks it as private, and the state dictionary layout is an implementation detail). Is there a standard, documented way of obtaining the number of function evaluations?
How can I get the number of gradient evaluations?
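For reference, the fallback I am aware of is counting closure calls myself. A minimal sketch with a toy quadratic objective (the objective and optimizer settings here are just placeholders, not my real setup); since LBFGS invokes the closure whenever it needs the loss, and the closure computes the gradient on every call, this counter covers gradient evaluations as well:

```python
import torch

x = torch.randn(10, requires_grad=True)
optimizer = torch.optim.LBFGS([x], max_iter=20)

n_feval = 0  # incremented on every closure call, i.e. every function (and gradient) evaluation

def closure():
    global n_feval
    n_feval += 1
    optimizer.zero_grad()
    loss = (x ** 2).sum()  # placeholder objective
    loss.backward()        # gradient computed on every call
    return loss

optimizer.step(closure)
print(n_feval)
```

This is more verbose than reading `state["func_evals"]`, but it only uses the public `step(closure)` API.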