Initialising hidden states in RNN

In the PyTorch word language model example, the RNN model class explicitly defines an init_hidden function that creates zero tensors for the hidden state. This function is then called in the main script before each epoch of training.
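For reference, the pattern I mean looks roughly like this (my own sketch, not the exact code from the example; the layer sizes and the LSTM choice are just placeholders):

```python
import torch
import torch.nn as nn

class RNNModel(nn.Module):
    """Minimal sketch of the init_hidden pattern from the word language model example."""

    def __init__(self, ntoken, ninp, nhid, nlayers):
        super().__init__()
        self.nhid, self.nlayers = nhid, nlayers
        self.encoder = nn.Embedding(ntoken, ninp)
        self.rnn = nn.LSTM(ninp, nhid, nlayers)
        self.decoder = nn.Linear(nhid, ntoken)

    def forward(self, input, hidden):
        output, hidden = self.rnn(self.encoder(input), hidden)
        return self.decoder(output), hidden

    def init_hidden(self, bsz):
        # Zero tensors with the same dtype/device as the model weights;
        # an LSTM needs a (hidden state, cell state) pair.
        weight = next(self.parameters())
        return (weight.new_zeros(self.nlayers, bsz, self.nhid),
                weight.new_zeros(self.nlayers, bsz, self.nhid))


# In the training script, before each epoch:
model = RNNModel(ntoken=1000, ninp=32, nhid=64, nlayers=2)
hidden = model.init_hidden(bsz=16)
```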

However, in the SNLI example no such init_hidden function is defined in the model, nor are the hidden states zeroed before each epoch of training.

Can I confirm this is because the latter, newer example relies on automatic initialisation of the hidden states (to zeros)?
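By "auto initialisation" I mean that if you call an nn.RNN/nn.LSTM without passing an initial hidden state, PyTorch creates zero states for you. A minimal illustration (the module sizes here are arbitrary):

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=16, num_layers=2)
x = torch.randn(5, 3, 8)  # (seq_len, batch, input_size)

# Explicit zero initial states...
h0 = torch.zeros(2, 3, 16)
c0 = torch.zeros(2, 3, 16)
out_explicit, _ = lstm(x, (h0, c0))

# ...give the same result as passing no initial state at all,
# because PyTorch defaults h_0 and c_0 to zeros.
out_default, _ = lstm(x)
print(torch.allclose(out_explicit, out_default))  # True
```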

That is correct as far as the language model example goes: it was written in October '16, while automatically initialised (zero) hidden states were only added in January '17. The SNLI example, however, uses manually zeroed hidden states.
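Roughly speaking, the manual zeroing there happens inside the model's forward pass rather than in the training loop, along these lines (a sketch of the pattern under my own made-up sizes, not the example's actual code):

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Sketch of an encoder that zeroes its hidden states on every forward call."""

    def __init__(self, d_embed=32, d_hidden=64, n_layers=1):
        super().__init__()
        self.d_hidden, self.n_layers = d_hidden, n_layers
        self.rnn = nn.LSTM(d_embed, d_hidden, n_layers)

    def forward(self, inputs):
        batch_size = inputs.size(1)
        state_shape = (self.n_layers, batch_size, self.d_hidden)
        # Fresh zero states for every batch, matching the input's dtype/device.
        h0 = c0 = inputs.new_zeros(state_shape)
        outputs, (ht, ct) = self.rnn(inputs, (h0, c0))
        return ht[-1]


enc = Encoder()
seq = torch.randn(10, 4, 32)   # (seq_len, batch, d_embed)
print(enc(seq).shape)          # torch.Size([4, 64])
```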

Ah yes, thank you.

Am I right in thinking that the way manual zeroing of the hidden states is done in the newer example also serves the purpose of the repackage_hidden() function called in the main script of the word language model example?
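For reference, by repackage_hidden() I mean the helper that wraps the hidden states in new tensors detached from their history, so that backpropagation is truncated at the batch boundary. A rough sketch of that pattern (the recursive handling of the LSTM tuple is my assumption about the exact code):

```python
import torch

def repackage_hidden(h):
    """Detach hidden states from their history so gradients stop at this point."""
    if isinstance(h, torch.Tensor):
        return h.detach()
    # An LSTM hidden state is a (h, c) tuple; detach each element recursively.
    return tuple(repackage_hidden(v) for v in h)


# Usage inside a training loop over consecutive batches:
# hidden = repackage_hidden(hidden)
# output, hidden = model(batch, hidden)
```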