Does calling .backward() clear the graph?

I was reading an article which says the graph is cleaned up in the loss.backward() step:

Is there any way I could get access to the underlying graph to make sure it is freed?
I tried checking loss.grad_fn and found that it stayed the same after backward(), although its id changed:

import torch

x = torch.tensor([0.5, 0.75], requires_grad=True)
y = torch.tensor([0.1, 0.90], requires_grad=True)
z = torch.exp(x * y).sum()
print(z.grad_fn)  # <SumBackward0 object at ...>
z.backward()
print(z.grad_fn)  # still a SumBackward0 object, only at a different address

What this strictly means is that the references to the saved tensors are lost, but the underlying graph still hangs on in memory. Are you asking something along those lines?
This might be why grad_fn remains intact, as you checked.
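For instance (a small sketch): the node objects and their connectivity can still be walked after backward(), even though the tensors they had saved are gone:

import torch

x = torch.tensor([0.5, 0.75], requires_grad=True)
y = torch.tensor([0.1, 0.90], requires_grad=True)
z = torch.exp(x * y).sum()
z.backward()

# The graph structure is still reachable from z after the backward call:
print(z.grad_fn)                 # SumBackward0
print(z.grad_fn.next_functions)  # ((ExpBackward0, 0),)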

Hi, thank you. I was asking whether I could verify that the references were indeed lost after calling backward(), the way I can clearly see the difference when I detach a tensor.

And how does this step save memory if the graph still hangs around?

Yes. The references to the saved tensors are definitely lost after a backward call, unless you pass retain_graph=True to the backward method, which you shouldn’t do unless necessary.

To see this, try a second backward call right after the first; the error message will say exactly that.
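For example (a minimal sketch; the exact wording of the error message depends on the PyTorch version):

import torch

x = torch.tensor([0.5, 0.75], requires_grad=True)
y = torch.tensor([0.1, 0.90], requires_grad=True)
z = torch.exp(x * y).sum()

z.backward()           # first call frees the saved tensors

try:
    z.backward()       # second call needs the intermediates that were just freed
except RuntimeError as e:
    print(e)           # "Trying to backward through the graph a second time ..."

# retain_graph=True keeps the saved tensors around, so a second call works,
# at the cost of holding on to that memory:
z2 = torch.exp(x * y).sum()
z2.backward(retain_graph=True)
z2.backward()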

As the references to the saved tensors are lost, memory is aggressively freed.
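A rough way to watch this happen is to compare allocated memory before and after backward(). The sketch below assumes a CUDA device, only so that torch.cuda.memory_allocated() can serve as a simple meter; the exact numbers will vary:

import torch

x = torch.randn(10_000_000, device="cuda", requires_grad=True)

h = x
for _ in range(10):
    h = torch.sin(h)   # each sin node saves its input for the backward pass
loss = h.sum()

print(torch.cuda.memory_allocated())  # counts the chain of saved intermediates

loss.backward()

print(torch.cuda.memory_allocated())  # much lower: the saved intermediates are freed,
                                      # only x, h and x.grad remain allocated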

What does “references” mean exactly? Does it mean the saved intermediate values of the graph?

How things work internally (what data structures are used, etc.) is something I’m not completely sure about.
But yes, “references” to the tensors that will be required for gradient computation in a backward call are saved as the graph is built during the forward pass.
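As a concrete illustration, you can walk the grad_fn chain recorded during the forward pass and look at what each node keeps. This is a sketch assuming a recent PyTorch version, where the saved tensors are exposed through internal _saved_* attributes (implementation details whose names may change between releases):

import torch

x = torch.tensor([0.5, 0.75], requires_grad=True)
y = torch.tensor([0.1, 0.90], requires_grad=True)
z = torch.exp(x * y).sum()

sum_node = z.grad_fn                      # SumBackward0: saves no tensors
exp_node = sum_node.next_functions[0][0]  # ExpBackward0
mul_node = exp_node.next_functions[0][0]  # MulBackward0

print(exp_node._saved_result)                       # exp(x * y), since d/du exp(u) = exp(u)
print(mul_node._saved_self, mul_node._saved_other)  # x and y, the two factors

z.backward()

try:
    print(exp_node._saved_result)  # the saved tensor has been freed by now
except RuntimeError as e:
    print(e)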

You might also like to read more on how PyTorch uses dynamic computational graphs.