GAN - If we want to maximize the objective function, why do we call backward()?

I was reading the official PyTorch tutorial on GAN models: DCGAN TUTORIAL

According to that tutorial, we want to maximize the generator's objective (the term involving G).

But in the code they call

errG.backward()

Now I'm confused.
If we call backward() and then take an optimizer step, gradient descent moves the parameters downhill on the loss,
but to maximize the objective we would want to move uphill (gradient ascent) and reach the maximum.
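To make my confusion concrete, here is a toy sketch of my mental model (plain Python, no PyTorch; the quadratic f and all names are just my own illustration, not from the tutorial). Descending on the *negated* objective -f is the same as ascending on f itself:

```python
def f(x):
    # toy "objective" with a maximum at x = 3
    return -(x - 3) ** 2

def grad_f(x):
    # analytic gradient of f
    return -2.0 * (x - 3)

x = 0.0
lr = 0.1
for _ in range(100):
    # gradient-descent step on the loss -f(x):
    # x -= lr * d(-f)/dx  is the same as  x += lr * df/dx
    x += lr * grad_f(x)

# x has converged to the maximizer, x ~= 3
```

So my question is whether the tutorial is doing something like this sign flip somewhere, because errG.backward() by itself looks like plain descent to me.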

What am I missing?