The LSTM tutorial does not init the hidden state

In the PyTorch tutorial about LSTMs, the code:

# Step 1. Remember that Pytorch accumulates gradients.
# We need to clear them out before each instance
model.zero_grad()

# Step 2. Get our inputs ready for the network, that is, turn them into
# Tensors of word indices.
sentence_in = prepare_sequence(sentence, word_to_ix)
targets = prepare_sequence(tags, tag_to_ix)

# Step 3. Run our forward pass.
tag_scores = model(sentence_in)

does not reinitialize the hidden state when a new sentence comes in. Is that right?
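For context, here is a minimal sketch of what an explicit per-sentence initialization might look like, and why the tutorial can get away without it: when `nn.LSTM` is called without a hidden-state tuple, it defaults to zeros, which is the same as passing freshly zeroed `(h0, c0)` for each sentence. The dimensions and tensors below are illustrative assumptions, not the tutorial's actual hyperparameters.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Illustrative sizes (assumptions, not the tutorial's values)
embedding_dim, hidden_dim, vocab_size = 6, 6, 5

embeddings = nn.Embedding(vocab_size, embedding_dim)
lstm = nn.LSTM(embedding_dim, hidden_dim)

# Word indices for one hypothetical sentence
sentence_in = torch.tensor([0, 1, 2, 3])
embeds = embeddings(sentence_in).view(len(sentence_in), 1, -1)

# Explicitly reinitialize the hidden state for each new sentence.
# Shape: (num_layers, batch, hidden_dim)
h0 = torch.zeros(1, 1, hidden_dim)
c0 = torch.zeros(1, 1, hidden_dim)
out_explicit, _ = lstm(embeds, (h0, c0))

# Omitting (h0, c0) makes nn.LSTM default to zeros, so the result
# is the same as passing fresh zeros per sentence.
out_default, _ = lstm(embeds)
print(torch.allclose(out_explicit, out_default))  # True
```

So if the model simply calls `self.lstm(embeds)` with no stored state, each sentence effectively starts from a zeroed hidden state anyway; an explicit reset only matters if the model caches the hidden state across calls.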