Is the gradient produced by the batch loss equal to the average of the gradients produced by the corresponding samples?

I am wondering whether these two kinds of gradients are equal. I use the SGD optimizer.
For the first gradient, I take a batch, compute the average_loss over it, and then get the gradient via average_loss.backward().
For the second, I compute the loss of each sample separately, get a gradient per sample, and then average those gradients.
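For concreteness, here is a minimal sketch of the comparison I have in mind (the toy nn.Linear model, the MSE loss, and the tensor shapes are just placeholders for illustration, not my actual setup):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(4, 1)
loss_fn = nn.MSELoss()      # default reduction='mean' averages over the batch
x = torch.randn(8, 4)       # a batch of 8 samples
y = torch.randn(8, 1)

# Approach 1: average loss over the batch, then one backward pass
model.zero_grad()
loss_fn(model(x), y).backward()
grad_from_batch = [p.grad.clone() for p in model.parameters()]

# Approach 2: one backward pass per sample, then average the gradients
per_sample_grads = []
for i in range(x.shape[0]):
    model.zero_grad()
    loss_fn(model(x[i:i+1]), y[i:i+1]).backward()
    per_sample_grads.append([p.grad.clone() for p in model.parameters()])
grad_from_samples = [torch.stack(gs).mean(dim=0) for gs in zip(*per_sample_grads)]

# Compare the two results parameter by parameter
for g1, g2 in zip(grad_from_batch, grad_from_samples):
    print(torch.allclose(g1, g2, atol=1e-6))
```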