Is the SGD in PyTorch a real SGD?

Shouldn’t it just be called “batch” gradient descent?
(Apologies if I am wrong.) If we compute the update
$\theta_{k+1} = \theta_{k} - \eta \, \nabla_{\theta} \frac{1}{m} \sum_{i=1}^{m} \mathrm{lossfn}(\mathrm{output}_i, \mathrm{target}_i)$ (assuming no momentum),
where $m \in (0, N]$ is the mini-batch size and $N$ is the dataset size, are we not explicitly calculating the entire (averaged) gradient for that batch?
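
To make the question concrete, here is a minimal sketch of what I mean (a toy linear model, MSE loss, and a made-up mini-batch of size `m` — not anyone's actual training code):

```python
import torch
from torch import nn, optim

# Hypothetical setup: linear model, MSE loss, one mini-batch of size m.
model = nn.Linear(10, 1)
loss_fn = nn.MSELoss()                             # reduction='mean': averages over the batch
optimizer = optim.SGD(model.parameters(), lr=0.1)  # no momentum

m = 32
xb, yb = torch.randn(m, 10), torch.randn(m, 1)     # one mini-batch

optimizer.zero_grad()
loss = loss_fn(model(xb), yb)   # mean loss over the m samples
loss.backward()                 # gradient of the batch-averaged loss
optimizer.step()                # theta <- theta - lr * grad
```

Unless `m` is 1, `optimizer.step()` applies the gradient averaged over the whole mini-batch, which is what I would call (mini-)batch gradient descent rather than stochastic gradient descent on a single sample.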