How to train RNNs with mini-batches?

Hi,
I am following http://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html#example-an-lstm-for-part-of-speech-tagging for an introduction to LSTMs.
Now I am wondering how one would change this tutorial to include mini-batch training (with torch.utils.data.DataLoader) in the most efficient way. Do I have to manually iterate over the LSTM output to feed the linear layer? (In the tutorial's implementation, the sequence dimension of the LSTM output is used as the batch dimension of the linear layer.)
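
For context, the forward pass from the tutorial looks roughly like this (paraphrased from memory, so details may differ from the current page):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LSTMTagger(nn.Module):
    def __init__(self, embedding_dim, hidden_dim, vocab_size, tagset_size):
        super(LSTMTagger, self).__init__()
        self.word_embeddings = nn.Embedding(vocab_size, embedding_dim)
        # Expects input of shape (seq_len, batch, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim)
        self.hidden2tag = nn.Linear(hidden_dim, tagset_size)

    def forward(self, sentence):                 # (seq_len,) word indices
        embeds = self.word_embeddings(sentence)  # (seq_len, embedding_dim)
        lstm_out, _ = self.lstm(embeds.view(len(sentence), 1, -1))
        # This view folds the sequence dimension into the batch
        # dimension of the linear layer: (seq_len, hidden_dim)
        tag_space = self.hidden2tag(lstm_out.view(len(sentence), -1))
        return F.log_softmax(tag_space, dim=1)
```

And this is the kind of batched version I have in mind, as a sketch only. The collate function, batch_first=True, and the toy data are my own assumptions, not from the tutorial:

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader

def collate(batch):
    # batch is a list of (sentence, tags) pairs of varying length;
    # pad both to the longest sequence in the batch -> (batch, max_seq_len)
    sentences, tags = zip(*batch)
    return (pad_sequence(sentences, batch_first=True),
            pad_sequence(tags, batch_first=True))

# Toy stand-in for a real tagging dataset (word/tag indices are placeholders)
data = [(torch.tensor([1, 2, 3, 4]), torch.tensor([0, 1, 0, 1])),
        (torch.tensor([5, 6]),       torch.tensor([1, 0]))]
loader = DataLoader(data, batch_size=2, shuffle=True, collate_fn=collate)

class BatchedTagger(nn.Module):
    def __init__(self, embedding_dim, hidden_dim, vocab_size, tagset_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, batch_first=True)
        self.hidden2tag = nn.Linear(hidden_dim, tagset_size)

    def forward(self, sentences):           # (batch, seq_len)
        embeds = self.embed(sentences)      # (batch, seq_len, embedding_dim)
        lstm_out, _ = self.lstm(embeds)     # (batch, seq_len, hidden_dim)
        # nn.Linear acts on the last dimension, so the whole output can be
        # projected in one call; no manual loop over time steps (I think?)
        return self.hidden2tag(lstm_out)    # (batch, seq_len, tagset_size)

model = BatchedTagger(embedding_dim=8, hidden_dim=16, vocab_size=10, tagset_size=2)
for sentences, tags in loader:
    scores = model(sentences)
```

My impression is that since nn.Linear operates on the last dimension, the (batch, seq_len, hidden_dim) output can be fed to it directly. But I am not sure whether this is the idiomatic approach, or whether one should also use pack_padded_sequence and mask the padded positions in the loss (e.g. via ignore_index).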