Question about RNN hidden layers

In the tutorial, the encoder defines a GRU with a single layer, and then forward() runs a for loop to apply it repeatedly, implementing the hidden layers: https://github.com/pytorch/tutorials/blob/master/intermediate_source/seq2seq_translation_tutorial.py#L350-L351

Another way would be to define a GRU with 3 layers directly (num_layers=3) and call it once.

What’s the difference between these two ways?
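To make the comparison concrete, here is a minimal sketch of the two variants (sizes are made up for illustration). Note that the loop reuses the *same* single-layer GRU, so its weights are shared across passes, while num_layers=3 creates three layers with separate weights:

```python
import torch
import torch.nn as nn

hidden_size = 256
x = torch.randn(10, 1, hidden_size)  # (seq_len, batch, features)

# Way 1 (as in the tutorial): one single-layer GRU applied 3 times in a loop.
# The same weight matrices are reused on every pass.
gru1 = nn.GRU(hidden_size, hidden_size, num_layers=1)
out = x
hidden = torch.zeros(1, 1, hidden_size)
for _ in range(3):
    out, hidden = gru1(out, hidden)

# Way 2: one stacked GRU with 3 layers, each layer with its own weights.
gru3 = nn.GRU(hidden_size, hidden_size, num_layers=3)
out3, hidden3 = gru3(x, torch.zeros(3, 1, hidden_size))

# Outputs have the same shape either way, but the stacked GRU holds
# three distinct sets of layer parameters (3x as many here, since
# input_size == hidden_size), and returns one hidden state per layer.
n1 = sum(p.numel() for p in gru1.parameters())
n3 = sum(p.numel() for p in gru3.parameters())
print(out.shape, out3.shape, n3 // n1)
```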