p.grad used in SGD vs. flat_grad used in LBFGS

Dear all,
Could you please help me understand the code in the SGD and LBFGS .py files?
I need to understand whether
p.grad, which is used in SGD, refers to the stochastic gradient, and whether
flat_grad, which is used in LBFGS.py, refers to the deterministic gradient.
Could you please comment on whether this is correct?
Also, how is the stochastic gradient calculated in the SGD optimizer?
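
For context, here is a minimal sketch of my current understanding. The `gather_flat_grad` helper below is my own illustration of what I think `flat_grad` holds, not the actual library code:

```python
import torch
import torch.nn.functional as F

# My understanding: p.grad is whatever loss.backward() stored on each
# parameter p. It is "stochastic" only because the loss is computed on
# a random minibatch; the optimizer itself computes no gradient.
model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(32, 10)  # one random minibatch
y = torch.randn(32, 1)

opt.zero_grad()
loss = F.mse_loss(model(x), y)
loss.backward()          # fills p.grad for every parameter p
opt.step()               # SGD's step() reads p.grad directly


def gather_flat_grad(params):
    # Illustrative helper (my own sketch): I believe flat_grad is
    # essentially the same p.grad tensors, flattened and concatenated
    # into one 1-D vector so LBFGS can treat all parameters as a
    # single vector.
    views = []
    for p in params:
        if p.grad is None:
            views.append(p.new_zeros(p.numel()))
        else:
            views.append(p.grad.reshape(-1))
    return torch.cat(views)


print(gather_flat_grad(model.parameters()).shape)  # torch.Size([11])
```

Is this roughly right, i.e. both optimizers consume the same `backward()` gradients and only the packaging differs?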