How does the PyTorch implementation of backpropagation compare to the back-propagation algorithm described in the Deep Learning book?

In section 6.5.6 of the Deep Learning book, a back-propagation algorithm is described.

However, I would think that PyTorch implements a different algorithm, because in PyTorch a tensor does not have access to its consumers. Is that correct?
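To illustrate what I mean, here is a small sketch of what I observe when inspecting PyTorch's autograd graph: a tensor only seems to record the operation that *produced* it (`grad_fn`), and the graph is linked backward from outputs to inputs via `next_functions`, rather than each tensor knowing its consumers as in the book's description.

```python
import torch

# Build a tiny computation: z = (x * 3) + 1
x = torch.tensor(2.0, requires_grad=True)
y = x * 3
z = y + 1

# z knows which operation produced it...
print(z.grad_fn)  # an AddBackward0 node

# ...and that node links *backward* to the producer of its inputs.
# So the graph is traversed from outputs toward inputs; x and y
# do not appear to store any list of their consumers.
print(z.grad_fn.next_functions)
print(z.grad_fn.next_functions[0][0] is y.grad_fn)
```

(The exact `*Backward0` class names are internal autograd details and may differ between PyTorch versions; the point is only the direction of the links.)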

If it is different, then how does it compare?

Kind regards,
Jens