Does this LSTM loop code break the computational graph in PyTorch?

The code below is from the Sequence Models and Long Short-Term Memory Networks tutorial in the PyTorch Tutorials 1.9.0+cu102 documentation:

for i in inputs:
    # Step through the sequence one element at a time.
    # after each step, hidden contains the hidden state.
    out, hidden = lstm(i.view(1, 1, -1), hidden)

To me, it seems like handling the LSTM this way breaks the computational graph, because hidden keeps getting overwritten on every iteration. Shouldn't all the hidden states be stored in a list instead, so that the computational graph is maintained and backprop can flow through the hidden states?

It is not overwritten under Python's semantics: the call returns a tuple, the tuple is unpacked, and the name 'hidden' is simply rebound to the new tensors. The old tensors that the name previously referred to are not destroyed; they remain alive because the new hidden state's autograd graph still references them, so backprop can flow through the whole chain of hidden states.
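
To see that the graph stays intact, here is a minimal, self-contained sketch. The layer sizes, sequence length, and the requires_grad flag on the initial hidden state are my own additions for illustration (they are not part of the tutorial snippet): if backprop from the final output reaches the initial hidden state, the chain through all intermediate hidden states must still be in the graph.

import torch
import torch.nn as nn

torch.manual_seed(0)
lstm = nn.LSTM(input_size=3, hidden_size=3)
inputs = [torch.randn(1, 3) for _ in range(5)]        # sequence of 5 steps
hidden = (torch.zeros(1, 1, 3, requires_grad=True),   # (h0, c0), requires_grad only
          torch.zeros(1, 1, 3, requires_grad=True))   # so we can inspect .grad below

h0 = hidden  # keep a handle on the initial state before the name is rebound

for i in inputs:
    # Each call rebinds the name 'hidden'; the previous hidden tensors are not
    # destroyed, they stay referenced inside the autograd graph of the new ones.
    out, hidden = lstm(i.view(1, 1, -1), hidden)

# Backprop from the final output: the gradient reaches the *initial* hidden
# state, which is only possible if the graph through every intermediate
# hidden state is still intact.
out.sum().backward()
print(h0[0].grad)  # a non-None gradient on the initial hidden state

So storing the intermediate hidden states in a list is only needed if you want to use them later (e.g. for attention or per-step losses), not to keep gradients flowing.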