Number of function/gradient evaluations in torch.optim

Hello,

I am using torch.optim.LBFGS and I want to get the number of function evaluations performed during an optimization. Currently I am using:

optimizer = torch.optim.LBFGS(x, **optim_params)  # x: iterable of tensors to optimize
stateOneEpoch = optimizer.state[optimizer._params[0]]  # LBFGS keeps its state on the first parameter
nfeval = stateOneEpoch["func_evals"]  # _params is a private, undocumented attribute

which gives a reasonable number. However, this seems to be an undocumented hack. Is there a standard, documented way of obtaining the number of function evaluations?
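
For reference, the same counter can also be read through the public state_dict() method, although that path is just as undocumented (the index 0 assumes a single parameter group whose first parameter carries the LBFGS state):

nfeval = optimizer.state_dict()["state"][0]["func_evals"]  # same undocumented key, via the public API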

How can I get the number of gradient evaluations?
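
In the meantime, the only workaround I can think of is counting the closure calls myself; since the closure computes the gradient via backward() on every call, the same counter also gives the number of gradient evaluations. A minimal sketch with a toy quadratic objective (the parameter and counter names are just placeholders):

import torch

x = torch.randn(5, requires_grad=True)  # placeholder parameter
optimizer = torch.optim.LBFGS([x], max_iter=20)

n_evals = 0  # manual counter

def closure():
    global n_evals
    n_evals += 1                # one function (and gradient) evaluation per call
    optimizer.zero_grad()
    loss = (x ** 2).sum()       # toy objective
    loss.backward()             # gradient computed on every closure call
    return loss

optimizer.step(closure)
print(n_evals)                  # matches state["func_evals"] after one step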

Thanks, Joaquin